Search Results: "chip"

8 December 2022

Reproducible Builds: Reproducible Builds in November 2022

Welcome to yet another report from the Reproducible Builds project, this time for November 2022. In all of these reports (which we have been publishing regularly since May 2015) we attempt to outline the most important things that we have been up to over the past month. As always, if you are interested in contributing to the project, please visit our Contribute page on our website.

Reproducible Builds Summit 2022 Following up from last month's report about our recent summit in Venice, Italy, a comprehensive report from the meeting has not been finalised yet, so watch this space! As a very small preview, however, we can link to several issues that were filed about the website during the summit (#38, #39, #40, #41, #42, #43, etc.). We also collectively learned about Software Bills of Materials (SBOMs) and how .buildinfo files can be seen/used as SBOMs. And, no less importantly, the Reproducible Builds t-shirt design has been updated.

Reproducible Builds at European Cyber Week 2022 During the European Cyber Week 2022, a Capture The Flag (CTF) cybersecurity challenge was created by Frédéric Pierret on the subject of Reproducible Builds. The challenge was pedagogical in nature, based on how to make a software release reproducible. To progress through the challenge, issues that affect the reproducibility of a build (such as build paths, timestamps, file ordering, etc.) had to be fixed in steps in order to obtain the final flag and win the challenge. At the end of the competition, five people succeeded in solving the challenge, all of whom were awarded a shirt. Frédéric Pierret intends to turn the challenge into a similar how-to in the Reproducible Builds documentation; two of the 2022 winners are shown here:

On business adoption and use of reproducible builds Simon Butler announced on the rb-general mailing list that the Software Quality Journal published an article called On business adoption and use of reproducible builds for open and closed source software. This article is an interview-based study which focuses on the adoption and use of Reproducible Builds in industry, particularly investigating the reasons why organisations might not have adopted them:
[…] industry application of R-Bs appears limited, and we seek to understand whether awareness is low or if significant technical and business reasons prevent wider adoption.
This is achieved through interviews with software practitioners and business managers, and touches on both the business and technical reasons supporting the adoption (or not) of Reproducible Builds. The article also begins with an excellent explanation and literature review, and even introduces a new helpful analogy for reproducible builds:
[Users are] able to perform a bitwise comparison of the two binaries to verify that they are identical and that the distributed binary is indeed built from the source code in the way the provider claims. Applied in this manner, R-Bs function as a canary, a mechanism that indicates when something might be wrong, and offer an improvement in security over running unverified binaries on computer systems.
The full paper is available to download on an open access basis. Elsewhere in academia, Beatriz Michelson Reichert and Rafael R. Obelheiro have published a paper proposing a systematic threat model for a generic software development pipeline, identifying possible mitigations for each threat (PDF). Under the Tampering rubric of their paper, they identify various attacks against Continuous Integration (CI) processes:
An attacker may insert a backdoor into a CI or build tool and thus introduce vulnerabilities into the software (resulting in an improper build). To avoid this threat, it is the developer's responsibility to take due care when making use of third-party build tools. Tampered compilers can be mitigated using diversity, as in the diverse double compiling (DDC) technique. Reproducible builds, a recent research topic, can also provide mitigation for this problem. (PDF)

Misc news
On our mailing list this month:

Debian & other Linux distributions Over 50 reviews of Debian packages were added this month, another 48 were updated and almost 30 were removed, all of which adds to our knowledge about identified issues. Two new issue types were added as well. [ ][ ] Vagrant Cascadian announced on our mailing list another online sprint to help clear the huge backlog of reproducible builds patches submitted, by performing NMUs (Non-Maintainer Uploads). The first such sprint took place on September 22nd, and others were held on October 6th and October 20th. Two additional sprints occurred in November, which resulted in the following progress: Lastly, Roland Clobus posted his latest update on the status of reproducible Debian ISO images to our mailing list. This reports that all major desktops build reproducibly with bullseye, bookworm and sid, and that no custom patches needed to be applied to Debian unstable for this result to occur. During November, however, Roland proposed some modifications to live-setup and the rebuild script has been adjusted to fix the failing Jenkins tests for Debian bullseye [ ][ ].
In other news, Miro Hrončok proposed a change to clamp build modification times to the value of SOURCE_DATE_EPOCH. This was initially suggested and discussed on a devel@ mailing list post but was later written up on the Fedora Wiki as well as being officially proposed to the Fedora Engineering Steering Committee (FESCo).
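For readers who have not come across the idea before, clamping means that any file timestamp newer than SOURCE_DATE_EPOCH is set back to that value, so the build output no longer depends on when the build actually ran. The following Python snippet is only a minimal sketch of that idea for illustration (the directory name is a placeholder); it is not Miro's actual Fedora implementation, which lives in the distribution's RPM build tooling:

```python
import os
import time

def clamp_mtimes(root, source_date_epoch=None):
    """Clamp modification times under `root` to SOURCE_DATE_EPOCH.

    Files newer than the epoch are set back to it; older files are left
    alone, so the result no longer depends on when the build ran.
    """
    if source_date_epoch is None:
        source_date_epoch = int(os.environ.get("SOURCE_DATE_EPOCH", time.time()))
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            if os.lstat(path).st_mtime > source_date_epoch:
                # follow_symlinks=False so symlinks themselves are clamped (Linux)
                os.utime(path, (source_date_epoch, source_date_epoch),
                         follow_symlinks=False)

clamp_mtimes("buildroot")  # "buildroot" is a hypothetical build output directory
```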

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

diffoscope diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 226 and 227 to Debian:
  • Support both python3-progressbar and python3-progressbar2, two modules providing the progressbar Python module. [ ]
  • Don't run Python decompiling tests on Python bytecode that file(1) cannot detect yet and Python 3.11 cannot unmarshal. (#1024335)
  • Don't attempt to attach text-only differences notice if there are no differences to begin with. (#1024171)
  • Make sure we recommend apksigcopier. [ ]
  • Tidy generation of os_list. [ ]
  • Make the code clearer around generating the Debian substvars. [ ]
  • Use our assert_diff helper in test_lzip.py. [ ]
  • Drop other copyright notices from lzip.py and test_lzip.py. [ ]
In addition to this, Christopher Baines added lzip support [ ], and FC Stegerman added an optimisation whereby we don't run apktool if no differences are detected before the signing block [ ].
A significant number of changes were made to the Reproducible Builds website and documentation this month, including Chris Lamb ensuring the openEuler logo is correctly visible with a white background [ ], FC Stegerman de-duplicating contributors by email address to avoid listing some of them twice [ ], Hervé Boutemy adding Apache Maven to the list of affiliated projects [ ] and boyska updating our Contribute page to remark that the Reproducible Builds presence on salsa.debian.org is not just the Git repository but is also for creating issues [ ][ ]. In addition to all this, however, Holger Levsen made the following changes:
  • Add a number of existing publications [ ][ ] and update metadata for some existing publications as well [ ].
  • Hide draft posts on the website homepage. [ ]
  • Add the Warpforge build tool as a participating project of the summit. [ ]
  • Clarify in the footer that we welcome patches to the website repository. [ ]

Testing framework The Reproducible Builds project operates a comprehensive testing framework at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In November, the following changes were made by Holger Levsen:
  • Improve the generation of meta package sets (used in grouping packages for reporting/statistical purposes) to treat Debian bookworm as equivalent to Debian unstable in this specific case [ ] and to parse the list of packages used in the Debian cloud images [ ][ ][ ].
  • Temporarily allow Frederic to ssh(1) into our snapshot server as the jenkins user. [ ]
  • Keep some reproducible jobs' Jenkins logs much longer [ ] (later reverted).
  • Improve the node health checks to detect failures to update the Debian cloud image package set [ ][ ] and to improve prioritisation of some kernel warnings [ ].
  • Always echo any IRC output to Jenkins output as well. [ ]
  • Deal gracefully with problems related to processing the cloud image package set. [ ]
Finally, Roland Clobus continued his work on testing Live Debian images, including adding support for specifying the origin of the Debian installer [ ] and warning when the image has unmet dependencies in the package list (e.g. due to a transition) [ ].
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can get in touch with us via:

Shirish Agarwal: Wayland, Hearing aids, Multiverse & Identity

Wayland First up, I read Antoine Beaupré's Wayland to Sway migration with interest. While he said it's done and dusted or something similar, the post shows there's still quite a ways to go. I wouldn't say it's done till it's integrated so well that a person installs it and doesn't really need to fiddle with config files as an average user. For specific use-cases you may need to, but that should be outside of a normal user (layperson) experience. I have been using mate for a long, long time and, truth be told, have been very happy with it. The only thing I found about Wayland on mate is this discussion, or rather this entry. The roadmap on Ubuntu Mate is also quite iffy. The Mate Wayland entry on the Debian wiki also perhaps needs an update, but I dunno much as the latest update it shares is from 2019 and it's 2022. One thing to note: at least according to Antoine, things should be better as and when it gets integrated, even on legacy hardware. I would be interested to know how it would work on old desktops and laptops rather than new ones, or is there some barrier? I, for one, would have liked to know why lightdm didn't work on Wayland and if there's support. From what little I know, lightdm is much lighter than gdm3 and doesn't require much memory, and from what little I have experienced it works very well with mate. I have been using it since 2015/16, although the Debian changelog tells me that it has been present since 2011. I was hoping to see if there was a Wayland-specific mailing list, something like debian-wayland, but apparently there's not :(. Searching for mate desktop wayland (and a few other variations on the keywords) fails to find any meaningful answer :(. FWIW, and I don't know the reason why, but the Archwiki never fails to amaze me. Interestingly, it just says No for mate. I probably will contact upstream in the coming days to know what their plans are for integrating Wayland, both short-term and long-term, and hopefully they will document them with an update, or if there is something more recent they have documented elsewhere, get that update onto the Debian wiki so people know. The other interesting thread I read was Russell Coker's Thinkpad X1 Carbon Gen5 entry. I would be in the market in a few months to find/buy a Thinkpad, but probably AMD rather than Intel, partly because of recent history with Intel and partly because AMD has a bit of an edge over Intel as far as graphics is concerned. I wonder why Russell was looking into Intel and not AMD. Any specific reason?

Hearing Aids I finally bought hearing aids a couple of weeks back and have been practicing using them. I am able to have quite a few conversations, although music is still something I'm not able to listen to clearly, but it is still a far cry from before and for the better. I am able to have conversations with people and also reply, and they do not have to make that extra effort that they needed to before. Makes things easier for everybody. The one I bought is at the starting range, although hearing aids go all the way to 8 lakhs for a pair (INR 800,000), the more expensive ones having WiFi, Bluetooth and more channels; it all depends on how much one can afford. And AFAIK there is not a single Indian manufacturer who is known in this business.

One thing I did notice is that while the hearing aids are remarkably sturdy if they fall down, as they are small you have to be careful of both dust and water. That does make life a bit difficult as my house and city both get sand quite a bit every day. I don't think they made any India-specific changes; if they had, it would probably make things better. I haven't yet looked at it, but it may be possible to hack them remotely. There may or may not be security issues involved; I will probably try once I've a bit more time and am a bit more comfortable, to see what I can find out. If I had bought them before, maybe I would have applied for the Debian event happening in Kerala, if nothing else, to document what happened there in detail. I will probably have to get a new motherboard for my desktop in a year or two, as quite a few motherboards also have WiFi (WiFi 6?), I think on the southbridge. I will at least have a look in the new year and know more as to what's been happening. For at least the last 2-3 years there has been a rumor, confirmed time and again, that the Tata Group has been in talks with multiple vendors to set up a chip fabrication and testing business, but to date they haven't been able to find one. They do keep on giving press conferences about the same but that's all they do :(. Just shared the latest one above.

The Long War Terry Pratchett, Stephen Baxter Long Earth Terry Pratchett, Stephen Baxter ISBN13: 9780062067777 Last month there was also a second-hand books sale where I was lucky enough to get my hands on The Long War. But before I share about the book itself, I had a discussion with another of my friends and have to re-share part of that conversation. While the gentleman was adamant that non-fiction books are great, my point, as always, is that both are equal. As I have perhaps shared on this blog itself, perhaps multiple times, I had seen a YT video in which a professor showed multiple physics textbooks, explained how they are and have been wrong, and kept them in a specific corner. He took the latest book, which he honestly said doesn't have any mistakes as far as he knows, and yet still kept it in that same corner, denoting that it is highly possible that future understanding will make the knowledge or understanding we have today look different. An example is physics in the nano world, how that is different and basically upends what we thought we understood. Now as far as the book is concerned, remember Michael Crichton's Timeline? Now that book was originally written in the 1960s while this one was written by both the honorable gentlemen in 2013. So there is almost 50+ years difference between the two books, and that even shows in how they think about things. In this book, you no longer need a big machine, but have something called a stepper machine which is, say, similar to a cellphone in size, frame, thickness etc. In this one, the idea of the multiverse is also there but done a tad differently. Here, we do not have other humans or copies of humans but multiple earths that may have the same or different geography depending on how evolution happened there. None of the multiverse earths have humans, but they have different species depending on the evolution that happened there. There are creatures called trolls, but they have a much different meaning and way about them than how most fantasy authors portray trolls. While they are big in this as well, they are as gentle as bears or rabbits. So the whole thing is about real estate and how humans have spread out on multiple earths and the politics therein. Interestingly, the story was trashed or given negative reviews on Goodreads. The sad part is/was that it was written and published in 2013, when perhaps the possibility of war or anything like that was very remote, especially in the States, but now we are in 2022 and have just had an insurrection happen, and a whole lot of Americans are radicalized, whether you see the left or the right depending on your ideology. An American did share a few weeks ago how some states are looking at Proportional Representation, and that should make both parties come more towards the center and be a bit more transparent. What was interesting to me is the fact that states have much more right to do elections and electioneering the way they want, rather than a set model which everyone has in common, which is what happens in India. This also does poke holes into the whole Donald Trump stolen democracy drama, but that's a different story altogether. One of the more interesting things I came to know is that there are 4 books in the Long Earth series and this was the second book. I do not want to dwell on the characters themselves as, frankly speaking, I haven't read all four books and it would be a gross injustice on my part to talk about the characters. Did I enjoy reading the book? For sure.
What was interesting and very true of human nature is that even if we have the ability, or had the ability, to have whole worlds to ourselves, we are bound to mess it up. And in that aspect, I don't think he is too far off the mark. If I had a whole world, wouldn't I try to exploit it to the best or worst of my ability? One of the more interesting topics in the book is the barter system they have thought of, called favors. If you are in multiple worlds, then having a currency, even fiat money, is of no use and they have to find ways and means to trade with one another. The book also touches a bit on slavery, but only just, and doesn't really explore it as much as it could have.

Identity Now this has many meanings to it. A couple of weeks ago, I saw a transgender meet. For the uninitiated, or rather people like me: basically it is about people who are born in one gender but do not identify with it, identifying instead with the other, and they express it first through their clothes and expression; the end of the journey perhaps is having the organs, but this may or may not be feasible, as such surgery is expensive and also not available everywhere. After section 377 was repealed a few years ago, we do have a third gender on forms as well as something called a Transgender Act, but how much the needle has moved in society is still a question. They were doing a roadshow near my house, hence I was able to talk with them with my new hearing aids, and while there was a lot of traffic I was able to understand some of their issues. For example, they find it difficult to get houses on rent, but then it is similar for bachelor guys or girls too. One could argue to what degree it is, and that perhaps may be. Also, there is a myth that they are somehow promiscuous, but that I believe is neither here nor there. Osho said an average person thinks about the opposite sex every few seconds or a minute. I am sure even Freud would have had similar ideas. So, if you look at it that way, everybody is promiscuous as far as thought is concerned. The other part being opportunity, but that again is a function of so many other things. Some people are able to attract a lot of people, others might not. And then whether they choose to act on that opportunity or not is another thing altogether. Another word that is or was used is gender fluid, but that too is iffy, as gender fluid may or may not mean transgender. Also, while watching some nature documentary a few days/weeks back, I came to know that trees have something like 18 odd genders. That just blows my mind and does make me re-question this whole idea of restricting sexuality and identity to only two, which seems somewhat regressive, at least to me. If we think humans are part of nature, then we need to open up perhaps a bit more. But identity, as I shared above, has more than one meaning. For example citizenship: that one is born in India is even messier to know, understand and define. I had come across this article about a couple of months back. Now think about this. There have been studies and surveys about citizenship, and they say something like 60% of birth registrations are done in metro cities. Now metro cities are 10 in number as defined by the Indian state. But there are roughly an odd 4k cities in India and probably twice the number of villages, and those are conservative numbers as we still don't record things meticulously, maybe due to the Indian oral tradition, or just being lazy, or both; one part is also that if you document people and villages and towns, then you are also obligated to give them some things as a state, and that perhaps is not what the Indian state wants. A small village in India could be anywhere from a few hundred people to a few thousand. And all the new interventions, whether it is PAN or Aadhar, have just made holes rather than making things better. They are not inclusive but exclusive. And none of this takes into account Indian character and the way things are done in India. In most households, excluding the celebs (they are in a world of pain altogether when it comes to baby names, but then it's big business, but that's an entirely different saga altogether, so not going to touch that).
I would use, or say, my individual case, as that is and seems to be something which is regular even today. I was given a nickname when I was 3 years old and given a name when I was 5-6, when I was put in school. I also came to know of a few kids in school who didn't like their names, and a couple of them cajoled and actually changed their names while they were kids; most of us just stayed with what we got. I do remember sharing about nakushi or something similar, a name given to a few girls in Maharashtra by their parents, where the state intervened and changed their names. But that too is another story in itself. What I find most problematic is that the state seems to be blind, and this seems to be by design rather than a mistake. A couple of years back, Assam did something called the NRC (National Register of Citizens) and by the Govt.'s own account it was a failure of massive proportions. And they still want to bring in the CAA, screwing up Assam more. And this is the same Govt. that, when shown how incorrect it was, blamed it all on the High Court, and it's the same Govt. that shopped around for judges to put away somebody called Mr. Saibaba (an invalid 90-year-old adivasi) against whom the Govt. hasn't even a single proof as of date. Apparently, they went to 6 judges who couldn't give the decision the Govt. wanted. All this info is in the public domain. So the current ruling party, i.e. the BJP, just wants to make more divisions rather than taking people along, as they don't have answers either on the economy, inflation or the issues that people are facing. One bright light has been Rahul Gandhi, who has been doing a padhyatra (walking) from Kanyakumari to Kashmir and has had tremendous success, although mainstream media has shown almost nothing of what he is doing or why he is doing it. Not only did he have people following him, there are and were many who took his example and, using the same values of inclusiveness, are walking where they can. And this is not to do with just a political party but more with a political thought of inclusiveness, that we are one irrespective of what I believe, eat, wear etc. And that gentleman has been giving press conferences while our dear P.M., even after 8 years, doesn't have the guts to do a single press conference. Before closing, I do want to take up another aspect: Rahul Gandhi's mother is Italian, or was from Italy before she married. But for the BJP she is still Italian. Rishi Sunak, who has become the UK Prime Minister, they think of as Indian, and yet he has sworn his oath using the Queen's name. And the same goes for Canada Kumar (Akshay Kumar) and many others. How the right manages to be blind and deaf to what it thinks is beyond me. All these people have taken an oath in the name of the Queen and have to be loyal to her, or rather now to King Charles III. The disconnect continues.

Russell Coker: Thinkpad X1 Carbon Gen5

Gen1 Since February 2018 I have been using a Thinkpad X1 Carbon Gen1 [1] as my main laptop. Generally I've been very happy with it, it's small and light, has good performance for web browsing etc, and with my transition to doing all compiles etc on servers it works well. When I wrote my original review I was unhappy with the keyboard, but I got used to that and found it to be reasonably good. The things that I have found as limits on it are the display resolution, as 1600*900 isn't that great by modern standards (most phones are a lot higher resolution), the size (slightly too large for the pocket of my Scott e Vest [2] jacket), and the lack of USB-C. Modern laptops can charge via USB-C/Thunderbolt while also doing USB and DisplayPort video over the same cable. USB-C monitors which support charging a laptop over the same cable as used for video input are becoming common (last time I checked the Dell web site, for many models of monitor there was a USB-C one that cost about $100 more). I work at a company with lots of USB-C monitors and docks so being able to use my personal laptop with the same displays when on breaks is really handy. A final problem with the Gen1 is that it has a proprietary and unusual connector for the SSD which means that a replacement SSD costs about what I paid for the entire laptop. Ever since the SSD gave a BTRFS checksum error I've been thinking of replacing it.

Choosing a Replacement The Gen5 is the first Thinkpad X1 Carbon to have USB-C. For work I had used a Gen6 which was quite nice [3]. But it didn't seem to offer much over the Gen5. So I started looking for cheap Thinkpad X1 Carbons of Gen5+.

A Cheap? Gen5 In July I saw an ebay advert for a Gen5 with FullHD display for $370 or nearest offer, with the downside being that the BIOS password had been lost. I offered $330 and the seller accepted; in retrospect that was unusually cheap and should have been a clue that I needed to do further investigation. It turned out that resetting the BIOS password is unusually difficult as it's in the TPM, so the system would only boot Windows. When I learned that, I should have sold the laptop to someone who wanted to run Windows and bought another. Instead I followed some instructions on the Internet about entering a wrong password multiple times to get to a password recovery screen; instead the machine locked up entirely and became unusable for Windows (so don't do that). Then I looked for ways of fixing the motherboard. The cheapest was $75.25 for a replacement BIOS flash chip that had a BIOS that didn't check the validity of passwords. The aim was to solder that on, set a new password (with any random text being accepted as the old password), then solder the old one back on for normal functionality. It turned out that I'm not good at fine soldering; after I had hacked at it a friend diagnosed the chip and motherboard to probably both be damaged (he couldn't get it going). The end solution was that my friend found a replacement motherboard for $170 from China. This gave a total cost of $575.25 for the laptop, which is more than the usual price of a Gen6 and more than I expected to pay. In the past when advocating buying second hand or refurbished laptops people would say "what happens if you get one that doesn't work properly"; the answer to that question is that I paid a lot less than the new cost of $2700+ for a Thinkpad X1 Carbon and got a computer that does everything I need.
One of the advantages of getting a cheap laptop is that I won't be so unhappy if I happen to drop it.

A Cheap Gen6 After the failed experiment with a replacement BIOS on the Gen5 I was considering selling it for scrap. So I bought a Gen6 from Australian Computer Traders via Amazon for $390 in August. The advert clearly stated that it was for a laptop with USB-C and Thunderbolt (Gen5+ features) but they shipped me a Gen4 that didn't even have USB-C. They eventually refunded me but I will try to avoid buying from them again.

Finally Working The laptop I now have has an i5-6300U CPU that rates 3242 on cpubenchmark.net. My Gen1 Thinkpad has an i7-3667U CPU that rates 2378 on cpubenchmark.net; note that the cpubenchmark.net people have rescaled their benchmark since my review of the Gen1 in 2018. So according to the benchmarks my latest laptop is about 36% faster for CPU operations. Not much of a difference when comparing systems manufactured in 2012 and 2017! According to the benchmarks a medium to high end recent CPU will be more than 10* faster than the one in my Gen5 laptop, but such a CPU would cost more than my laptop cost. The storage is a 256G NVMe device that can do sustained reads at 900MB/s; that's not even twice as fast as the SSD in my Gen1 laptop, although NVMe is designed to perform better for small IO. It has 2*USB-C ports, both of which can be used for charging, which is a significant benefit over the Gen6 I had for work in 2018 which only had one. I don't know why Lenovo made Gen6 machines that were lesser than Gen5 in such an important way. It can power my Desklab portable 4K monitor [4] but won't send a DisplayPort signal over the same USB-C cable. I don't know if this is a USB-C cable issue or some problem with the laptop recognising displays. It works nicely with Dell USB-C monitors and docks that power the laptop over the same cable as used for DisplayPort. Also the HDMI port works with 4K monitors, so at worst I could connect my Desklab monitor via a USB-C cable for power and HDMI for data. The inability to change the battery without disassembly is still a problem, but hopefully USB-C connected batteries capable of charging such a laptop will become affordable in the near future, and I have had some practice at disassembling this laptop. It still has the Ethernet dongle annoyance, and of course the seller didn't include that. But USB ethernet devices are quite good and I have a few of them. In conclusion it's worth the $575.25 I paid for it and would have been even better value for money if I had been a bit smarter when buying. It meets the initial criteria of USB-C power and display and of fitting in my jacket pocket, as well as being slightly better than my old laptop in every other way.

26 November 2022

Matthew Garrett: Poking a mobile hotspot

I've been playing with an Orbic Speed, a relatively outdated device that only speaks LTE Cat 4, but the towers I can see from here are, uh, not well provisioned so throughput really isn't a concern (and refurbs are $18, so). As usual I'm pretty terrible at just buying devices and using them for their intended purpose, and in this case it has the irritating behaviour that if there's a power cut and the battery runs out it doesn't boot again when power returns, so here's what I've learned so far.

First, it's clearly running Linux (nmap indicates that, as do the headers from the built-in webserver). The login page for the web interface has some text reading "Open Source Notice" that highlights when you move the mouse over it, but that's it - there's code to make the text light up, but it's not actually a link. There's no exposed license notices at all, although there is a copy on the filesystem that doesn't seem to be reachable from anywhere. The notice tells you to email them to receive source code, but doesn't actually provide an email address.

Still! Let's see what else we can figure out. There's no open ports other than the web server, but there is an update utility that includes some interesting components. First, there's a copy of adb, the Android Debug Bridge. That doesn't mean the device is running Android, it's common for embedded devices from various vendors to use a bunch of Android infrastructure (including the bootloader) while having a non-Android userland on top. But this is still slightly surprising, because the device isn't exposing an adb interface over USB. There's also drivers for various Qualcomm endpoints that are, again, not exposed. Running the utility under Windows while the modem is connected results in the modem rebooting and Windows talking about new hardware being detected, and watching the device manager shows a bunch of COM ports being detected and bound by Qualcomm drivers. So, what's it doing?

Sticking the utility into Ghidra and looking for strings that correspond to the output that the tool conveniently leaves in the logs subdirectory shows that after finding a device it calls vendor_device_send_cmd(). This is implemented in a copy of libusb-win32 that, again, has no offer for source code. But it's also easy to drop that into Ghidra and discover that vendor_device_send_cmd() is just a wrapper for usb_control_msg(dev,0xc0,0xa0,0,0,NULL,0,1000);. Sending that from Linux results in the device rebooting and suddenly exposing some more USB endpoints, including a functional adb interface. Although, annoyingly, the rndis interface that enables USB tethering via the modem is now missing.
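For anyone wanting to poke at their own unit, that control transfer is easy to reproduce from Linux with pyusb. This is just a sketch rather than anything from the original post: the vendor/product IDs below are placeholders (check lsusb for what the modem actually reports).

```python
import usb.core  # pyusb

# Placeholder IDs; substitute whatever lsusb shows for the modem.
dev = usb.core.find(idVendor=0x05c6, idProduct=0xf601)
if dev is None:
    raise SystemExit("modem not found")

# Equivalent of usb_control_msg(dev, 0xc0, 0xa0, 0, 0, NULL, 0, 1000):
# bmRequestType 0xc0 (device-to-host, vendor), bRequest 0xa0,
# wValue 0, wIndex 0, zero-length data stage, 1000 ms timeout.
dev.ctrl_transfer(0xc0, 0xa0, 0, 0, 0, timeout=1000)
```

After this the device should reboot and re-enumerate with the extra endpoints described above.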

Unfortunately the adb user is unprivileged, but most files on the system are world-readable. data/logs/atfwd.log is especially interesting. This modem has an application processor built into the modem chipset itself, and while the modem implements the Hayes Command Set there's also a mechanism for userland to register that certain AT commands should be pushed up to userland. These are handled by the atfwd_daemon that runs as root, and conveniently logs everything it's up to. This includes having logged all the communications executed when the update tool was run earlier, so let's dig into that.

The system sends a bunch of AT+SYSCMD= commands, each of which is in the form of echo (stuff) >>/usrdata/sec/chipid. Once that's all done, it sends AT+CHIPID, receives a response of CHIPID:PASS, and then AT+SER=3,1, at which point the modem reboots back into the normal mode - adb is gone, but rndis is back. But the logs also reveal that between the CHIPID request and the response is a security check that involves RSA. The logs on the client side show that the text being written to the chipid file is a single block of base64 encoded data. Decoding it just gives apparently random binary. Heading back to Ghidra shows that atfwd_daemon is reading the chipid file and then decrypting it with an RSA key. The key is obtained by calling a series of functions, each of which returns a long base64-encoded string. Decoding each of these gives 1028 bytes of high entropy data, which is then passed to another function that decrypts it using AES CBC using a key of 000102030405060708090a0b0c0d0e0f and an initialization vector of all 0s. This is somewhat weird, since there's 1028 bytes of data and 128 bit AES works on blocks of 16 bytes. The behaviour of OpenSSL is apparently to just pad the data out to a multiple of 16 bytes, but that implies that we're going to end up with a block of garbage at the end. It turns out not to matter - despite the fact that we decrypt 1028 bytes of input only the first 200 bytes mean anything, with the rest just being garbage. Concatenating all of that together gives us a PKCS#8 private key blob in PEM format. Which means we have not only the private key, but also the public key.
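As a rough sketch of how that key recovery could be reproduced in Python (the fragment handling, zero-padding the ciphertext and keeping the first 200 bytes of each plaintext, is inferred from the description above rather than taken from the actual tooling):

```python
import base64
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

AES_KEY = bytes.fromhex("000102030405060708090a0b0c0d0e0f")
IV = bytes(16)  # all-zero initialization vector

def decrypt_fragment(b64_fragment):
    """Decrypt one of the base64 strings returned by the key functions."""
    data = base64.b64decode(b64_fragment)
    # 1028 bytes isn't a multiple of the 16-byte AES block size; mimic the
    # observed OpenSSL behaviour by padding the ciphertext before decrypting.
    if len(data) % 16:
        data += bytes(16 - len(data) % 16)
    decryptor = Cipher(algorithms.AES(AES_KEY), modes.CBC(IV)).decryptor()
    plaintext = decryptor.update(data) + decryptor.finalize()
    return plaintext[:200]  # only the first 200 bytes are meaningful

# fragments = [...]  # the base64 strings extracted from atfwd_daemon
# pem_key = b"".join(decrypt_fragment(f) for f in fragments)
```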

So, what's in the encrypted data, and where did it come from in the first place? It turns out to be a JSON blob that contains the IMEI and the serial number of the modem. This is information that can be read from the modem in the first place, so it's not secret. The modem decrypts it, compares the values in the blob to its own values, and if they match sets a flag indicating that validation has succeeded. But what encrypted it in the first place? It turns out that the JSON blob is just POSTed to http://pro.w.ifelman.com/api/encrypt and an encrypted blob returned. Of course, the fact that it's being encrypted on the server with the public key and sent to the modem that decrypted with the private key means that having access to the modem gives us the public key as well, which means we can just encrypt our own blobs.
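Something along these lines would do it; note that the RSA padding scheme isn't stated in the post, so PKCS#1 v1.5 here is an assumption (chosen as the most common default), and the file name, IMEI and serial values are made up for illustration:

```python
import base64
import json
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import padding

# The PKCS#8 PEM blob recovered from the modem, saved to a local file.
pem_key = open("modem_key.pem", "rb").read()
private_key = serialization.load_pem_private_key(pem_key, password=None)
public_key = private_key.public_key()

blob = json.dumps({"imei": "123456789012345", "sn": "EXAMPLE123"}).encode()

# Padding is an assumption; the post doesn't say which scheme the server uses.
ciphertext = public_key.encrypt(blob, padding.PKCS1v15())
chipid = base64.b64encode(ciphertext)
# The base64 text is what gets echoed into /usrdata/sec/chipid via AT+SYSCMD.
```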

What does that buy us? Digging through the code shows the only case that it seems to matter is when parsing the AT+SER command. The first argument to this is the serial mode to transition to, and the second is whether this should be a temporary transition or a permanent one. Once parsed, these arguments are passed to /sbin/usb/compositions/switch_usb which just writes the mode out to /usrdata/mode.cfg (if permanent) or /usrdata/mode_tmp.cfg (if temporary). On boot, /data/usb/boot_hsusb_composition reads the number from this file and chooses which USB profile to apply. This requires no special permissions, except if the number is 3 - if so, the RSA verification has to be performed first. This is somewhat strange, since mode 9 gives the same rndis functionality as mode 3, but also still leaves the debug and diagnostic interfaces enabled.

So what's the point of all of this? I'm honestly not sure! It doesn't seem like any sort of effective enforcement mechanism (even ignoring the fact that you can just create your own blobs, if you change the IMEI on the device somehow, you can just POST the new values to the server and get back a new blob), so the best I've been able to come up with is to ensure that there's some mapping between IMEI and serial number before the device can be transitioned into production mode during manufacturing.

But, uh, we can just ignore all of this anyway. Remember that AT+SYSCMD= stuff that was writing the data to /usrdata/sec/chipid in the first place? Anything that's passed to AT+SYSCMD is just executed as root. Which means we can just write a new value (including 3) to /usrdata/mode.cfg in the first place, without needing to jump through any of these hoops. Which also means we can just adb push a shell onto there and then use the AT interface to make it suid root, which avoids needing to figure out how to exploit any of the bugs that are just sitting there given it's running a 3.18.48 kernel.
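A minimal sketch of that shortcut using pyserial follows; the device node is a guess (the AT-capable port will be one of the Qualcomm serial ports the modem exposes), and the composition number matches the description above:

```python
import serial  # pyserial

# /dev/ttyUSB2 is a placeholder; use whichever exposed port answers AT commands.
with serial.Serial("/dev/ttyUSB2", 115200, timeout=2) as port:
    # Write a new USB composition directly, skipping the chipid/RSA dance.
    # Mode 9 keeps rndis plus the debug/diagnostic interfaces, per the above.
    port.write(b"AT+SYSCMD=echo 9 > /usrdata/mode.cfg\r\n")
    print(port.read(256).decode(errors="replace"))
```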

Anyway, I've now got a modem that's got working USB tethering and also exposes a working adb interface, and I've got root on it. Which let me dump the bootloader and discover that it implements fastboot and has an oem off-mode-charge command which solves the problem I wanted to solve of having the device boot when it gets power again. Unfortunately I still need to get into fastboot mode. I haven't found a way to do it through software (adb reboot bootloader doesn't do anything), but this post suggests it's just a matter of grounding a test pad, at which point I should just be able to run fastboot oem off-mode-charge and it'll be all set. But that's a job for tomorrow.

Edit: Got into fastboot mode and ran fastboot oem off-mode-charge 0 but sadly it doesn't actually do anything, so I guess next is going to involve patching the bootloader binary. Since it's signed with a cert titled "General Use Test Key (for testing only)" it apparently doesn't have secure boot enabled, so this should be easy enough.


19 November 2022

Joerg Jaspert: From QNAP QTS to TrueNAS Scale

History, Setup So for quite some time I have had a QNAP TS-873x here, equipped with 8 Western Digital Red 10 TB disks, plus 2 WD Blue 500G M2 SSDs. The QNAP itself has an AMD Embedded R-Series RX-421MD with 4 cores and was equipped with 48G RAM. Initially I had been quite happy, the system is nice. It was fast, it was easy to get to run and the setup of things I wanted was simple enough. All in a web interface that tries to imitate a kind of workstation feeling and also tries to hide that it is actually a web interface. Naturally with that amount of disks I had a RAID6 for the disks, plus RAID1 for the SSDs. And then configured as a big storage pool with the RAID1 as cache. Under the hood QNAP uses MDADM RAID and LVM (if you want, with thin provisioning), in some form of embedded Linux. The interface allows for regular snapshots of your storage with flexible enough schedules to create them, so it all appears pretty good.

QNAP slow Fast forward some time and it gets annoying. First off, you really should have regular RAID resyncs scheduled, and while you can set priorities on them and have them run at low priority, they make the whole system feel very sluggish, which is quite annoying. And sure, a power failure (rare, but it can happen) means another full resync run. Also, it appears all of the snapshots are always mounted to some /mnt/snapshot/something place (df on the system gets quite unusable). Second, the reboot times. QNAP seems to be affected by the "more features, fuck performance" virus, and they bloat their OS with more and more features while completely ignoring performance. Every time they do an upgrade it feels worse. Lately reboot times went up to 10 to 15 minutes - and then it still hadn't started the virtual machines / docker containers one might run on it. Another 5 to 10 minutes for those. Opening the file explorer - ages spent calculating what to show. Trying to get the storage setup shown? Go get a coffee, but please fetch the beans directly from the plantation, or you are too fast. Annoying it was. And no, no broken disks or fan or anything, it all checks out fine.

Replace QNAP's QTS system So I started looking around what to do. More RAM may help a little bit, but I already had 48G, and the system itself appears to only support 64G maximum, so not much chance of it helping enough. Hardware is all fine and working, so software needs to be changed. Sounds hard, but turns out, it is not.

TrueNAS And I found that multiple people replaced the QNAP's own system with a TrueNAS installation and generally had been happy. Looking further I found that TrueNAS has a variant called Scale - which is based on Debian. Doubly good, that, so I went off checking what I may need for it.

Requirements Heck, that was a step back. To install TrueNAS you need an HDMI out and a disk to put it on. The one that QTS uses is too small, so no option.
QNAP's original internal USB drive (DOM)
So I could either use one of the SSDs that played cache (and should do so again in TrueNAS), or get the QNAP original replaced. HDMI out is simple: get a cheap card and put it into one of the two PCIe-4x slots, done. The disk thing looked more complicated, as QNAP uses some internal usb stick thing. Turns out it is just a USB stick that has an 8+1pin connector. Couldn't find anything nice as a replacement, but hey, there are 9-pin to USB-A adapters.
A 9-pin to USB-A adapter
With that adapter, one can take some random M2 SSD and an M2-to-USB case, plus some cabling, and voila, we have a nice system disk.
9-pin adapter to USB-A, connected to the motherboard with some more cable
Obviously there isn't a good place to put this SSD case and cable, but the QNAP case is large enough to find space and use some cable ties to store it safely. Space enough to get the cable from the side, where the mainboard is, to the place I mounted it, so all fine.
Mounted SSD in its external case (also showing the video card)
The next best M2 SSD was a Western Digital Red with 500G - and while this is WAY too much for TrueNAS, it works. And hey, only using a tiny fraction? Oh so many more cells available internally to use when others break. Or something. Together with the Asus card mounted I was able to install TrueNAS. Which is simple, their installer is easy enough to follow, just make sure to select the right disk to put it on.

Preserving data during the move Switching from QNAP QTS to TrueNAS Scale means changing from MDADM RAID with LVM and ext4 on top to ZFS, and as such all data on it gets erased. So a backup first is helpful, and I got myself two external Seagate USB disks of 6TB each - enough for the data I wanted to keep. Copying things all over took ages, especially as the QNAP backup thingie sucks, it was breaking quite often. Also, for some reason I did not investigate, the performance of it was really bad. It started at a maximum of 50MB/s, but the last terabyte of data was copied at MUCH less than that, and so it took much longer than I anticipated. Copying back was slow too, but much less so. Of course reading things usually is faster than writing, with it going around 100MB/s most of the time, which is quite a bit more - still not what USB3 can actually do, but I guess the AMD chip doesn't want to go that fast.

TrueNAS experience The installation went mostly smoothly, the only real trouble had been on my side. Turns out that a bad network cable does NOT help the network setup, who would have thought. Other than that it is the usual set of questions you would expect, a reboot, and then some web interface. And here the differences start. The whole system boots up much faster. Not even a third of the time compared to QTS. One important thing: as TrueNAS Scale is Debian based, and hence a Linux kernel, it automatically detects and assembles the old RAID arrays that QTS put on. Which TrueNAS can do nothing with, so it helps to manually stop them and wipe the disks. Afterwards I put ZFS on the disks, with a similar setup to what I had before. The spinning rust are the data disks in a RAIDZ2 setup, the two SSDs are added as cache devices. Unlike MDADM, ZFS does not have a long sync process. Also unlike the MDADM/LVM/EXT4 setup from before, ZFS works differently. It manages the RAID thing but it also does the volume and filesystem parts. Quite different handling, and I'm still getting used to it, so no, I won't write some ZFS introduction now.

Features The two systems cannot be compared completely, they have pretty different target audiences. QNAP is more for the user that wants some network storage that offers a ton of extra features easily available via a clickable interface, while TrueNAS appears more oriented to people that want a fast but reliable storage system. TrueNAS does not offer all the extra bloat the QNAP delivers. Still, you have the ability to run virtual machines and it seems it comes with Rancher, so some kubernetes/container ability is there. It lacks essential features like assigning PCI devices to virtual machines, so that is not useful right now, but I assume that will come in a future version. I am still exploring it all, but I like what I have right now. Still rebuilding my setup to have all shares exported and used again, but the most important ones are working already.

11 September 2022

Shirish Agarwal: Politics, accessibility, books

Politics I have been reading books, both fiction and non-fiction, for a long long time. My first book was a comic, most probably when I was down with Malaria as a kid. I must have been around 4-5 years old. Over the years, books have given me great joy and I continue to find nuggets of useful information, both in fiction as well as non-fiction books. So here's to sharing something and how that can lead you to a rabbit hole. This entry would be a bit NSFW as far as language is concerned. NYPD Red 5 by James Patterson First of all, I have no clue as to why James Patterson's popularity has been falling. He used to be right there with Lee Child and others, but not so much now. While I try to be mysterious about books, I would give a bit of a heads-up so people know what to expect. This is probably more for the adult crowd as there is a bit of sex as well as quite a few grey characters. NYPD Red is a sort of elite police task force that basically is for celebrities. In the book series, they do a lot of ass-kissing (figuratively more than literally). Now the reason I have always liked fiction is that however wild the assumption or presumption is, it does have somewhere a grain of truth. And each and every time I read a book or two, that gets cemented. One of the statements in the book told something about how 9/11 took a lot of police personnel out of the game. First, there were a number of policemen who were patrolling the Twin Towers, so they perished literally during the explosion. Then there were policemen who were given the cases to close (bring the cases to conclusion). When you are investigating your own brethren, or even civilians who perished on 9/11, you must experience emotional trauma with no outlet. Mental health even in cops is treated the same and given similar help as for you and me (i.e. next to none). But both of these were my assumptions. The only statement that was in the book was that they lost a lot of bench strength. Even the NYFD (New York Fire Department). This led me to With Crime At Record Lows, Should NYC Have Fewer Cops? This is more right-wing sentiment and in fact, there have been calls to defund the police. This led me to https://cbcny.org/ and one specific graph. Unfortunately, this tells the story from 2010-2022 but not before. I was looking for data from around 1999 to 2005 because that would tell whether or not it happened. Then I remembered reading in newspapers a year or two later how 9/11 had led NYC into recession. I looked it up online and for sure NY was booming before 9/11. One can argue that NYC could come down and that is pretty much possible, everything that goes up comes down, it's a law of nature, but it would have been steady rather than abrupt. And once you are in recession, the first thing to go is personnel. So people from both NYPD and NYFD were let go, even though they were needed the most then. As you can see, a single statement in a book can take you to places and times, literally. Edit: Addition 11th September There were quite a few people who also died from the New York Port Authority, which also lost quite a number of people directly and indirectly and did a lot of patrolling of the water bodies near NYC. Later on, even in their department, there were a lot of early retirements.

Kosovo A couple of days back I had a look at the Debconf 2023 BOF that was done in Kosovo. One of the interesting things that happened during the BOF is when a woman participant chimed in and asked India to recognize Kosovo. It immediately triggered me and I opened the Kosovo Wikipedia page to get some understanding of the topic. Reading up on it, I came to know Russia didn't agree and doesn't recognize Kosovo. Mr. Modi likes Putin and India imports a lot of its oil from Russia. Unrelatedly, but still useful, we declined to join IPEF. Earlier, we had rejected China's BRI. India has never been as vulnerable as she is now. Our foreign balance has reached record lows. Now India has been importing quite a bit of Russian crude and has been buying arms and ammunition from them. We are also scheduled to buy a couple of warships and submarines etc. We even took arms and ammunition from them on lease. So we can't afford to have them displeased with India, even though Russia has more than friendly relations with both China and Pakistan. At the same time, the U.S. is back to aiding Pakistan, which the mainstream media in India refuses to even cover. And to top all of this, we have the Chip 4 Alliance, but that needs its own article truth be told; we will make do with a paragraph. Edit: Addition 11th September Seems Kosovo isn't unique in that situation, there are 3-4 states like that. A brief look at worldpopulationreview tells you there are many more.

Chip 4 Alliance For almost a decade I have been screaming about this on my blog as well as everywhere else: chip fabrication is a national security thing. And for years, most people denied it. And now we have the Chip 4 Alliance. Now to understand this, you have to understand that China, somewhere around 2014 or so, came up with something called the big fund. One can argue one way or the other how successful the fund has been, but it has, without doubt, created ripples so strong that the U.S., Taiwan, Japan, and probably South Korea will join and try to stem the tide. Interestingly, in this grouping, South Korea is the weakest in its statements and what they have been saying. Within the group itself, there is a lot of tension and China would use that; there are a number of unresolved issues between the three countries that both China and Russia would exploit. For example, the comfort women issue between South Korea and Japan, or the 1985 Accord Agreement between Japan and the U.S. Now people need to understand this: this is not just about China but also about us. If China has 5-6 times India's GDP and their research budget is at the very least 100 times what India spends, how do you think we will be self-reliant? Whom are we fooling? Are we not tired of fooling ourselves? In diplomacy, countries use leverage. Sadly, we let go of some of our most experienced negotiators in 2014 and since then have been singing in the wind.

Accessibility, Jitsi, IRC, Element-Desktop The Wikipedia page on Accessibility says the following: Accessibility is the design of products, devices, services, vehicles, or environments so as to be usable by people with disabilities. The concept of accessible design and practice of accessible development ensures both direct access (i.e. unassisted) and indirect access, meaning compatibility with a person's assistive technology. Now IRC, or Internet Relay Chat, has been accessible for a long time. I know of even blind people who have been able to navigate IRC quite effortlessly, as there has been a lot of work done to make sure all the joints speak to each other so people with one or more disabilities can still use, and contribute, without an issue. It does help that IRC and many clients have been there since the 1970s, so most of them have had more than enough time to get all the bugs fixed, and both text-to-speech and speech-to-text work brilliantly on IRC. Newer software like Jitsi or, for that matter, Telegram is lacking those features. A few days ago, it was shared with me on Telegram that Samsung Voice Input is also able to do the same. Samsung Voice Input works wonders as it translates voice to text; I have not yet tried the text-to-speech, but perhaps somebody can and they can share what the results are, one way or the other. I have tried element-desktop both on the desktop as well as the mobile phone and it has been disappointing, to say the least. On the desktop, it is unruly and freezes once in a while, and is buggy. The mobile version is a little better, but that's not saying a lot. I prefer the desktop version as I can use the full-size keyboard. The bug I reported has been there since its Riot days. I had put up a bug report even then. All in all, yesterday was disappointing.

31 August 2022

Russell Coker: Links Aug 2022

Armor is an interesting technology from Manchester University for stopping rowhammer attacks on DRAM [1]. Unfortunately armor is also a term used for DRAM that looks fancy for ricers, so finding out whether it's used in production is difficult. The Reckless Limitless Scope of Web Browsers is an insightful analysis of the size of web specs and why it's impossible to implement them properly [2]. Framework is a company that makes laptop kits you can assemble and upgrade, an interesting concept [3]. I'll keep buying second hand laptops for less than $400, but if I wanted to spend $1000 then I'd consider one of these. FS has an insightful article about why unstructured job interviews (IE the vast majority of job interviews) give a bad result [4]. How a child killer inspired Ayn Rand and indirectly conservatives all around the world [5]. Ayn Rand's love of a notoriously sadistic child killer is well known, but this article has a better discussion of it than most. 60 Minutes had an interesting article on Foreign Accent Syndrome, where people suddenly sound like they are from another country [6]. An 18 minute video but worth watching. Most Autistic people have experience of people claiming that they must be from another country because of the way they speak. Having differences in brain function lead to differences in perceived accent is nothing new. The IEEE has an interesting article about the creation of the i860, the first million-transistor chip [7]. The Game of Trust is an interactive web site demonstrating the game theory behind trusting other people [8]. Here's a choose-your-own-adventure game on Twitter (Nitter is a non-tracking proxy for Twitter) [9]; can you get your pawn elected Emperor of the Holy Roman Empire?

27 August 2022

James Valleroy: FreedomBox Packages in Debian

FreedomBox is a Debian pure blend that reduces the effort needed to run and maintain a small personal server. Being a pure blend means that all of the software packages which are used in FreedomBox are included in Debian. Most of these packages are not specific to FreedomBox: they are common things such as Apache web server, firewalld, slapd (LDAP server), etc. But there are a few packages which are specific to FreedomBox: they are named freedombox, freedombox-doc-en, freedombox-doc-es, freedom-maker, fbx-all and fbx-tasks. freedombox is the core package. You could say, if freedombox is installed, then your system is a FreedomBox (or a derivative). It has dependencies on all of the packages that are needed to get a FreedomBox up and running, such as the previously mentioned Apache, firewalld, and slapd. It also provides a web interface for the initial setup, configuration, and installing apps. (The web interface service is called Plinth and is written in Python using Django framework.) The source package of freedombox also builds freedombox-doc-en and freedombox-doc-es. These packages install the FreedomBox manuals for English and Spanish, respectively. freedom-maker is a tool that is used to build FreedomBox disk images. An image can be copied to a storage device such as a Solid State Disk (SSD), eMMC (internal flash memory chip), or a microSD card. Each image is meant for a particular hardware device (or target device), or a set of devices. In some cases, one image can be used across a wide range of devices. For example, the amd64 image is for all 64-bit x86 architecture machines (including virtual machines). The arm64 image is for all 64-bit ARM machines that support booting a generic image using UEFI. fbx-all and fbx-tasks are special metapackages, both built from a single source package named debian-fbx. They are related to tasksel, a program that displays a curated list of packages that can be installed, organized by interest area. Debian blends typically provide task files to list their relevant applications in tasksel. fbx-tasks only installs the tasks for FreedomBox (without actually installing FreedomBox). fbx-all goes one step further and also installs freedombox itself. In general, FreedomBox users won t need to interact with these two packages. Links:

12 August 2022

Wouter Verhelst: Upgrading a Windows 10 VM to Windows 11

I run Debian on my laptop (obviously); but occasionally, for $DAYJOB, I have some work to do on Windows. In order to do so, I have had a Windows 10 VM in my libvirt configuration that I can use. A while ago, Microsoft released Windows 11. I recently found out that all the components for running Windows 11 inside a libvirt VM are available, and so I set out to upgrade my VM from Windows 10 to Windows 11. This wasn't as easy as I thought, so here's a bit of a writeup of all the things I ran into, and how I fixed them. Windows 11 has a number of hardware requirements that aren't necessary for Windows 10. There are a number of them, but the most important three are a modern enough processor, a TPM 2.0 module, and Secure Boot. So let's see about all three.

A modern enough processor If your processor isn't modern enough to run Windows 11, then you can probably forget about it (unless you want to use qemu JIT compilation -- I dunno, probably not going to work, and also not worth it if it were). If it is, all you need is the "host-passthrough" setting in libvirt, which I've been using for a long time now. Since my laptop is less than two months old, that's not a problem for me.

A TPM 2.0 module My Windows 10 VM did not have a TPM configured, because it wasn't needed. Luckily, a quick web search told me that enabling that is not hard. All you need to do is:
  • Install the swtpm and swtpm-tools packages
  • Add the TPM module, by adding the following XML snippet to your VM configuration:
    <devices>
      <tpm model='tpm-tis'>
        <backend type='emulator' version='2.0'/>
      </tpm>
    </devices>
    
    Alternatively, if you prefer the graphical interface, click on the "Add hardware" button in the VM properties, choose the TPM, set it to Emulated, model TIS, and set its version to 2.0.
You're done! Well, with this part, anyway. Read on.
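If you prefer scripting this over clicking or hand-editing the XML, the same device can also be attached with the libvirt Python bindings. This is only a rough sketch of that idea, not part of the original writeup; it assumes the domain is still called "w10" and that you are talking to the system libvirt instance, and it uses the same XML snippet as above.

import libvirt

TPM_XML = """
<tpm model='tpm-tis'>
  <backend type='emulator' version='2.0'/>
</tpm>
"""

# Connect to the system libvirt daemon and look up the Windows domain
conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("w10")   # assumption: the VM is named "w10"

# Attach to the persistent configuration so the TPM is there on the next boot
dom.attachDeviceFlags(TPM_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
conn.close()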

Secure boot Here is where it gets interesting. My Windows 10 VM was old enough that it was configured for the older i440fx chipset. This one is limited to PCI and IDE, unlike the more modern q35 chipset (which supports PCIe and SATA, and does not support IDE nor SATA in IDE mode). There is a UEFI/Secure Boot-capable BIOS for qemu, but it apparently requires the q35 chipset. Fun fact (which I found out the hard way): Windows stores where its boot partition is somewhere. If you change the hard drive controller from an IDE one to a SATA one, you will get a BSOD at startup. In order to fix that, you need a recovery drive, which you can create onto a virtual USB disk. To create the virtual USB disk, go to the VM properties, click "Add hardware", choose "Storage", choose the USB bus, and then under "Advanced options", select the "Removable" option, so it shows up as a USB stick in the VM. Note: creating the recovery drive takes a while (it took about an hour on my system), and your virtual USB drive needs to be 16G or larger (I used the libvirt default of 20G). There is no possibility, using the buttons in the virt-manager GUI, to convert the machine from i440fx to q35. However, that doesn't mean it's not possible to do so. I found that the easiest way is to use the direct XML editing capabilities in the virt-manager interface; if you edit the XML in an editor it will produce error messages if something doesn't look right and tell you to go and fix it, whereas the virt-manager GUI will actually fix things itself in some cases (and will produce helpful error messages if not). What I did was:
  • Take backups of everything. No, really. If you fuck up, you'll have to start from scratch. I'm not responsible if you do.
  • Go to the Edit->Preferences option in the VM manager, then on the "General" tab, choose "Enable XML editing"
  • Open the Windows VM properties, and in the "Overview" section, go to the "XML" tab.
  • Change the value of the machine attribute of the domain.os.type element, so that it says pc-q35-7.0.
  • Search for the domain.devices.controller element that has pci in its type attribute and pci-root in its model one, and set the model attribute to pcie-root instead.
  • Find all domain.devices.disk.target elements, setting their dev=hdX to dev=sdX, and bus="ide" to bus="sata"
  • Find the USB controller (domain.devices.controller with type="usb") and set its model to qemu-xhci. You may also want to add ports="15" if you didn't have that yet (a rough scripted sketch of these XML edits follows this list).
  • Perhaps also add a few PCIe root ports:
    <controller type="pci" index="1" model="pcie-root-port"/>
    <controller type="pci" index="2" model="pcie-root-port"/>
    <controller type="pci" index="3" model="pcie-root-port"/>
    
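For what it's worth, the attribute changes from the list above can also be applied with a small script against a dumped copy of the domain XML (for example from virsh dumpxml) rather than in the virt-manager editor. The following Python sketch is only an illustration of which elements change; the file names are assumptions, and it is no substitute for taking backups first.

import xml.etree.ElementTree as ET

tree = ET.parse("w10.xml")            # assumption: `virsh dumpxml w10 > w10.xml`
root = tree.getroot()

# <os><type machine="pc-i440fx-..."> becomes the q35 machine type
root.find("./os/type").set("machine", "pc-q35-7.0")

# The pci-root controller becomes pcie-root
for ctrl in root.findall("./devices/controller[@type='pci']"):
    if ctrl.get("model") == "pci-root":
        ctrl.set("model", "pcie-root")

# IDE disks become SATA disks (hdX -> sdX)
for target in root.findall("./devices/disk/target"):
    if target.get("bus") == "ide":
        target.set("bus", "sata")
        target.set("dev", target.get("dev").replace("hd", "sd", 1))

# The USB controller becomes qemu-xhci with 15 ports
for ctrl in root.findall("./devices/controller[@type='usb']"):
    ctrl.set("model", "qemu-xhci")
    ctrl.set("ports", "15")

tree.write("w10-q35.xml")             # assumed output file for review/redefinition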
I figured out most of this by starting the process for creating a new VM, on the last page of the wizard that pops up selecting the "Modify configuration before installation" option, going to the "XML" tab on the "Overview" section of the new window that shows up, and then comparing that against what my current VM had. Also, it took me a while to get this right, so I might have forgotten something. If virt-manager gives you an error when you hit the Apply button, compare notes against the VM that you're in the process of creating, and copy/paste things from there to the old VM to make the errors go away. As long as you don't remove configuration that is critical for things to start, this shouldn't break matters permanently (but hey, use your backups if you do break -- you have backups, right?) OK, cool, so now we have a Windows VM that is... unable to boot. Remember what I said about Windows storing where the controller is? Yeah, there you go. Boot from the virtual USB disk that you created above, and select the "Fix the boot" option in the menu. That will fix it. Ha ha, only kidding. Of course it doesn't. I honestly can't tell you everything that I fiddled with, but I think the bit that eventually fixed it was where I chose "safe mode", which caused the system to do a hiccup, a regular reboot, and then suddenly everything was working again. Meh. Don't throw the virtual USB disk away yet, you'll still need it. Anyway, once you have it booting again, you will now have a machine that theoretically supports Secure Boot, but you're still running off an MBR partition. I found a procedure on how to convert things from MBR to GPT that was written almost 10 years ago, but surprisingly it still works, except for the bit where the procedure suggests you use diskmgmt.msc (for one thing, that was renamed; and for another, it can't touch the partition table of the system disk either). The last step in that procedure says to "restart your computer!", which is fine, except at this point you obviously need to switch over to the TianoCore firmware, otherwise you're trying to read a UEFI boot configuration on a system that only supports MBR booting, which obviously won't work. In order to do that, you need to add a loader element to the domain.os element of your libvirt configuration:
<loader readonly="yes" type="pflash">/usr/share/OVMF/OVMF_CODE_4M.ms.fd</loader>
When you do this, you'll note that virt-manager automatically adds an nvram element. That's fine, let it. I figured this out by looking at the documentation for enabling Secure Boot in a VM on the Debian wiki, and using the same trick as for how to switch chipsets that I explained above. Okay, yay, so now secure boot is enabled, and we can install Windows 11! All good? Well, almost. I found that once I enabled secure boot, my display reverted to a 1024x768 screen. This turned out to be because I was using older unsigned drivers, and since we're using Secure Boot, that's no longer allowed, which means Windows reverts to the default VGA driver, and that only supports the 1024x768 resolution. Yeah, I know. The solution is to download the virtio-win ISO from one of the links in the virtio-win github project, connecting it to the VM, going to Device manager, selecting the display controller, clicking on the "Update driver" button, telling the system that you have the driver on your computer, browsing to the CD-ROM drive, clicking the "include subdirectories" option, and then tell Windows to do its thing. While there, it might be good to do the same thing for unrecognized devices in the device manager, if any. So, all I have to do next is to get used to the completely different user interface of Windows 11. Sigh. Oh, and to rename the "w10" VM to "w11", or some such. Maybe.

18 July 2022

Bits from Debian: DebConf22 welcomes its sponsors!

DebConf22 is taking place in Prizren, Kosovo, from 17th to 24th July, 2022. It is the 23rd edition of the Debian conference and organizers are working hard to create another interesting and fruitful event for attendees. We would like to warmly welcome the sponsors of DebConf22, and introduce you to them. We have four Platinum sponsors. Our first Platinum sponsor is Lenovo. As a global technology leader manufacturing a wide portfolio of connected products, including smartphones, tablets, PCs and workstations as well as AR/VR devices, smart home/office and data center solutions, Lenovo understands how critical open systems and platforms are to a connected world. Infomaniak is our second Platinum sponsor. Infomaniak is Switzerland's largest web-hosting company, also offering backup and storage services, solutions for event organizers, live-streaming and video on demand services. It wholly owns its datacenters and all elements critical to the functioning of the services and products provided by the company (both software and hardware). The ITP Prizren is our third Platinum sponsor. ITP Prizren intends to be a changing and boosting element in the area of ICT, agro-food and creatives industries, through the creation and management of a favourable environment and efficient services for SMEs, exploiting different kinds of innovations that can contribute to Kosovo to improve its level of development in industry and research, bringing benefits to the economy and society of the country as a whole. Google is our fourth Platinum sponsor. Google is one of the largest technology companies in the world, providing a wide range of Internet-related services and products such as online advertising technologies, search, cloud computing, software, and hardware. Google has been supporting Debian by sponsoring DebConf for more than ten years, and is also a Debian partner sponsoring parts of Salsa's continuous integration infrastructure within Google Cloud Platform. Our Gold sponsors are: Roche, a major international pharmaceutical provider and research company dedicated to personalized healthcare. Microsoft, enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more. Ipko Telecommunications, provides telecommunication services and it is the first and the most dominant mobile operator which offers fast-speed mobile internet 3G and 4G networks in Kosovo. Ubuntu, the Operating System delivered by Canonical. U.S. Agency for International Development, leads international development and humanitarian efforts to save lives, reduce poverty, strengthen democratic governance and help people progress beyond assistance. Our Silver sponsors are: Pexip, is the video communications platform that solves the needs of large organizations. Deepin is a Chinese commercial company focusing on the development and service of Linux-based operating systems. Hudson River Trading, a company researching and developing automated trading algorithms using advanced mathematical techniques. Amazon Web Services (AWS), is one of the world's most comprehensive and broadly adopted cloud platforms, offering over 175 fully featured services from data centers globally. The Bern University of Applied Sciences with near 7,800 students enrolled, located in the Swiss capital. credativ, a service-oriented company focusing on open-source software and also a Debian development partner. 
Collabora, a global consultancy delivering Open Source software solutions to the commercial world. Arm: with the world's Best SoC Design Portfolio, Arm-powered solutions have been supporting innovation for more than 30 years and are deployed in over 225 billion chips to date. GitLab, an open source end-to-end software development platform with built-in version control, issue tracking, code review, CI/CD, and more. Two Sigma, which applies rigorous inquiry, data analysis, and invention to help solve the toughest challenges across financial services. Starlabs, which builds software experiences and focuses on building teams that deliver creative tech solutions for its clients. Solaborate, which has the world's most integrated and powerful virtual care delivery platform. Civil Infrastructure Platform, a collaborative project hosted by the Linux Foundation, establishing an open source base layer of industrial-grade software. The Matanel Foundation, which operates in Israel; its first concern is to preserve the cohesion of a society and a nation plagued by divisions. Bronze sponsors: bevuta IT, Kutia, Univention, Freexian. And finally, our Supporter level sponsors: Altus Metrum, Linux Professional Institute, Olimex, Trembelat, Makerspace IC Prizren, Cloud68.co, Gandi.net, ISG.EE, IPKO Foundation, The Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) GmbH. Thanks to all our sponsors for their support! Their contributions make it possible for a large number of Debian contributors from all over the globe to work together, help and learn from each other at DebConf22.

21 June 2022

John Goerzen: Lessons of Social Media from BBSs

In the recent article The Internet Origin Story You Know Is Wrong, I was somewhat surprised to see the argument that BBSs are a part of the Internet origin story that is often omitted. Surprised because I was there for BBSs, and even ran one, and didn't really consider them part of the Internet story myself. I even recently enjoyed a great BBS documentary and still didn't think of the connection in this way. But I think the argument is a compelling one.
In truth, the histories of Arpanet and BBS networks were interwoven socially and materially as ideas, technologies, and people flowed between them. The history of the internet could be a thrilling tale inclusive of many thousands of networks, big and small, urban and rural, commercial and voluntary. Instead, it is repeatedly reduced to the story of the singular Arpanet.
Kevin Driscoll goes on to highlight the social aspects of the "modem world": how BBSs and online services like AOL and CompuServe were ways for people to connect. And yet, AOL members couldn't easily converse with CompuServe members, and vice-versa. Sound familiar?
Today's social media ecosystem functions more like the modem world of the late 1980s and early 1990s than like the open social web of the early 21st century. It is an archipelago of proprietary platforms, imperfectly connected at their borders. Any gateways that do exist are subject to change at a moment's notice. Worse, users have little recourse, the platforms shirk accountability, and states are hesitant to intervene.
Yes, it does. As he adds, "People aren't the problem. The problem is the platforms." A thought-provoking article, and I think I'll need to buy the book it's excerpted from!

21 May 2022

Dirk Eddelbuettel: #37: Introducing r2u with 2 x 19k CRAN binaries for Ubuntu 22.04 and 20.04

One month ago I started work on a new side project which is now up and running, and deserving of an introductory blog post: r2u. It was announced in two earlier tweets (first, second) which contained the two (wicked) demos below, also found at the documentation site. So what is this about? It brings full and complete CRAN installability to Ubuntu LTS, both the focal release 20.04 and the recent jammy release 22.04. It is unique in resolving all R and CRAN packages with the system package manager. So whenever you install something it is guaranteed to run, as its dependencies are resolved and co-installed as needed. Equally important, no shared library will be updated or removed by the system, as the possible dependencies of the R packages are known and declared. No other package management system for R does that, as only apt on Debian or Ubuntu can, and this project integrates all CRAN packages (plus 200+ BioConductor packages). It will work with any Ubuntu installation on laptop, desktop, server, cloud, container, or in WSL2 (but is limited to Intel/AMD chips, sorry Raspberry Pi or M1 laptop). It covers all of CRAN (or nearly 19k packages), all the BioConductor packages depended upon (currently over 200), and only excludes fewer than a handful of CRAN packages that cannot be built.

Usage Setup instructions are described concisely in the repo README.md and at the documentation site. The setup consists of just five (or fewer) simple steps, and scripts are provided too for focal (20.04) and jammy (22.04).

Demos Check out these two demos (also at the r2u site):

Installing the full tidyverse in one command and 18 seconds

Installing brms and its depends in one command and 13 seconds (and showing gitpod.io)

Integration via bspm The r2u setup can be used directly with apt (or dpkg, or any other frontend to the package management system). Once installed, apt update; apt upgrade will take care of new packages. For this to work, all CRAN packages (and all BioConductor packages depended upon) are mapped to names like r-cran-rcpp and r-bioc-s4vectors: an r prefix, the repo, and the package name, all lower-cased. That works, but thanks to the wonderful bspm package by Iñaki Úcar we can do much better. It connects R's own install.packages() and update.packages() to apt. So we can just say (as the demos above show) install.packages("tidyverse") or install.packages("brms"), and binaries are installed via apt, which is fantastic as it connects R to the system package manager. The setup is really only two lines and is described at the r2u site as part of the setup.
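To make that naming convention concrete, here is a tiny, purely illustrative Python helper (not part of r2u itself) that maps an R package name to the corresponding Debian binary package name as described above.

# Illustrative only: the mapping described above is an "r" prefix, the
# repository (cran or bioc), and the lower-cased package name.
def deb_name(pkg: str, repo: str = "cran") -> str:
    return f"r-{repo}-{pkg.lower()}"

print(deb_name("Rcpp"))               # r-cran-rcpp
print(deb_name("S4Vectors", "bioc"))  # r-bioc-s4vectors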

History and Motivation Turning CRAN packages into .deb binaries is not a new idea. Albrecht Gebhardt was the first to realize this about twenty years ago (!!) and implemented it with a single Perl script. Next, Albrecht, Stefan Moeller, David Vernazobres and I built on top of this, which is described in this useR! 2007 paper. A most excellent generalization and rewrite was provided by Charles Blundell in a superb Google Summer of Code contribution in 2008 which I mentored. Charles and I described it in this talk at useR! 2009. I ran that setup for a while afterwards, but it died via an internal database corruption in 2010, right when I tried to demo it at CRAN headquarters in Vienna. This peaked at, if memory serves, about 5k packages: all of CRAN at the time. Don Armstrong took it one step further in a full reimplementation which, if I recall correctly, covered all of CRAN and BioConductor for what may have been 8k or 9k packages. Don had a stronger system (with full RAID-5) but it also died in a crash and was never rebuilt, even though he and I could have relied on Debian resources (as all these approaches focused on Debian). During that time, Michael Rutter created a variant that cleverly used an Ubuntu-only setup utilizing Launchpad. This repo is still going strong, used and relied upon by many, and about 5k packages (per distribution) strong. At one point, a group consisting of Don, Michael, Gábor Csárdi and myself (as lead/PI) had financial support from the RConsortium ISC for a more general re-implementation, but that support was withdrawn when we did not have time to deliver. We should also note other long-standing approaches. Detlef Steuer has been using the openSUSE Build Service to provide nearly all of CRAN for openSUSE for many years. Iñaki Úcar built a similar system for Fedora, described in this blog post. Iñaki and I also have an arXiv paper describing all this.

Details Please see the r2u site for all details on using r2u.

Acknowledgements The help of everybody who has worked on this is greatly appreciated. So a huge Thank you! to Albrecht, David, Stefan, Charles, Don, Michael, Detlef, Gábor, Iñaki and whoever I may have omitted. Similarly, thanks to everybody working on R, CRAN, Debian, or Ubuntu: it all makes for a superb system. And another big Thank you! goes to my GitHub sponsors whose continued support is greatly appreciated.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

21 April 2022

Andy Simpkins: Firmware and Debian

There has been a flurry of activity on the Debian mailing lists ever since Steve McIntyre raised the issue of including non-free firmware as part of official Debian installation images. Firstly I should point out that I am in complete agreement with Steve's proposal to include non-free firmware as part of an installation image. Likewise I think that we should have a separate archive section for firmware, because without doing so it will soon become almost impossible to install onto any new hardware. However, as always, the issue is more nuanced than a first glance would suggest. Let's start by defining: what is firmware? Firmware is any software that runs outside the orchestration of the operating system. Typically firmware will be executed on a processor (or processors) separate from the processor(s) running the OS, but this does not need to be the case. As Debian we are content that our systems can operate using fully free and open source software and firmware; we can install our OS without needing any non-free firmware. This is an illusion! Each and every PC platform contains non-free firmware. It may be possible to run free firmware on some graphics controllers, Wi-Fi chip-sets, or Ethernet cards, and we can (and perhaps should) choose to spend our money on systems where this is the case. When installing a new system we might still be forced to hold our nose and install with non-free firmware on the peripheral before we are able to upgrade it to FLOSS firmware later, if such firmware exists or this is even possible to do. However, after the installation we are running a full FLOSS system in terms of software and firmware. We all (almost without exception) are running proprietary firmware whether we like it or not. Even after carefully selecting graphics and network hardware with FLOSS firmware options, we still haven't escaped from non-free firmware. Other peripherals contain firmware too: each keyboard, each disk (SSDs and spinning rust). Even the USB memory stick that you use to hold the Debian installation image contains a microcontroller and hence also contains firmware that runs on it.
  1. Much of this firmware can not even be updated.
  2. Some can be updated, but it is stored in flash ROM and the hardware vendor has defeated all programming methods (possibly circumvented with a hardware mod).
  3. Some of it can be updated but requires external device programmers (and often the programming connections are a series of test points dotted around the board and not on a header in order to make programming as difficult as possible).
  4. Sometimes the firmware can be updated from within the host operating system (i.e. Debian)
  5. Sometimes, as Steve pointed out in his post, the hardware vendor has enough firmware on a peripheral to perform basic functions, perhaps enough to install the OS, but requires additional firmware to enable specific features (e.g. higher screen resolutions, hardware-accelerated functions, etc.)
  6. Finally some vendors don t even bother with any non-volatile storage beyond a basic boot loader and firmware must be loaded before the device can be used in any mode.
What about the motherboard? If we are lucky we might be able to run a FLOSS implementation of the UEFI subsystem (edk2/tianocore, for example); indeed the non-AMD64/i386 platforms based around ARM and MIPS architectures are often the most free when it comes to firmware. What about the microcode on the processor? Personally I wasn't aware that this was updatable firmware until the Spectre and Meltdown classes of vulnerabilities arose a few years back. So back to Debian images including non-free firmware. This is specifically to address the last two use cases mentioned above, i.e. where firmware needs to be loaded to achieve a minimum functioning of a device, although it could also include motherboard support and microcode as well. As far as I can tell the proposal exists for several reasons: #1 Because some freely distributable firmware is required for more and more devices in order to install Debian, or because, whilst Debian can be installed, a desktop environment cannot be started or does not fully function. #2 Because frankly it is less work to produce, test and maintain fewer installation images. As someone who performs tests on our images, this clearly gets my vote :-) And perhaps most important of all... #3 Because our least experienced users, and new users, will download an official image and give up if things don't "just work"™. Steve's proposal (option 5) would address these issues and I fully support it. I would love to see separate repositories for firmware and firmware-non-free. Additionally, to accompany firmware-non-free, I would like to have information on what the firmware actually does. Can I run my hardware without it? What function(s) are limited without the firmware? Better yet, is there a FLOSS equivalent that I can load instead? Is this something that we can present in the Debian installer? I would love not to require non-free firmware, but if I can't, I would love it if DI would enable a user to make an informed choice as to what, if any, firmware is installed. Should we be requesting (requiring?) this information for any non-free firmware image that we carry in the archive? Finally, let's consider firmware in the wider general case, not just the case where we need to load firmware from within Debian each and every boot. Personally I am annoyed whenever a hardware manufacturer has gone out of their way to prevent firmware updates. Let's face it, software contains bugs, and we can assume that the software making up a firmware image will as well. Critical (security) vulnerabilities found in firmware, especially if it runs on the same processor(s) as the OS, can impact the wider system, not just the device itself. This will mean that, without updatable firmware, the hardware itself should be withdrawn from use whilst it would otherwise still function. By preventing firmware updates, vendors are forcing early obsolescence in the hardware they sell, perhaps good for their bottom line, but certainly no good for users or the environment. Here I can practice what I preach. As an Electronic Engineer / Systems architect I have been beating the drum for In System Updatable firmware for ALL programmable devices in a system, be it a simple peripheral or a deeply embedded system. I can honestly say that over the last 20 years (yes, I have been banging this particular drum for that long) I have had 100% success in arguing this case commercially. Having device programmers in R&D departments is one thing, but that is additional cost for production, and field service.
Needing custom programming headers or even a bed-of-nails fixture to connect your target device to a programmer is more trouble than it is worth. Finally, the ability to update firmware in the field means that you can launch your product on schedule, make a sale and ship to a customer even if the first thing that you need to do is download an update. Offering that to any project manager will make you very popular indeed. So what if this firmware is non-free? As long as the firmware resides in non-volatile media without needing the OS to interact with it, we as a project don't need to carry it in our archives. And we as principled individuals can vote with our feet and wallets by choosing to purchase devices that have free firmware. But where that isn't an option, I'll take updatable but non-free firmware over non-free firmware that cannot be updated any day of the week. Sure, the manufacturer can choose to no longer support the firmware, and it is shocking how soon this happens; often in the consumer market the manufacturer has withdrawn support for a product before it even reaches the end user (in which case we should boycott that manufacturer in future until they either change their ways or go bust). But again, if firmware can be updated in-system, that would at least allow the possibility of open firmware to arise. Indeed the only commercial case I have seen to argue against updatable firmware has been either for DRM (in which case, good, let's get rid of both) or for RF licence compliance, and even then it is tenuous, because in this case the manufacturer wants ISP for its own use right up until a device is shipped out the door, typically achieved by blowing one-time programmable fuse links.

14 April 2022

Reproducible Builds: Supporter spotlight: Amateur Radio Digital Communications (ARDC)

The Reproducible Builds project relies on several projects, supporters and sponsors for financial support, but they are also valued as ambassadors who spread the word about the project and the work that we do. This is the third instalment in a series featuring the projects, companies and individuals who support the Reproducible Builds project. If you are a supporter of the Reproducible Builds project (of whatever size) and would like to be featured here, please get in touch with us at contact@reproducible-builds.org. We started this series by featuring the Civil Infrastructure Platform project and followed this up with a post about the Ford Foundation. Today, however, we'll be talking with Dan Romanchik, Communications Manager at Amateur Radio Digital Communications (ARDC).
Chris Lamb: Hey Dan, it's nice to meet you! So, for someone who has not heard of Amateur Radio Digital Communications (ARDC) before, could you tell us what your foundation is about? Dan: Sure! ARDC's mission is to support, promote, and enhance experimentation, education, development, open access, and innovation in amateur radio, digital communication, and information and communication science and technology. We fulfill that mission in two ways:
  1. We administer an allocation of IP addresses that we call 44Net. These IP addresses (in the 44.0.0.0/8 IP range) can only be used for amateur radio applications and experimentation.
  2. We make grants to organizations whose work aligns with our mission. This includes amateur radio clubs as well as other amateur radio-related organizations and activities. Additionally, we support scholarship programs for people who either have an amateur radio license or are pursuing careers in technology, STEM education and open-source software development projects that fit our mission, such as Reproducible Builds.

Chris: How might you relate the importance of amateur radio and similar technologies to someone who is non-technical? Dan: Amateur radio is important in a number of ways. First of all, amateur radio is a public service. In fact, the legal name for amateur radio is the Amateur Radio Service, and one of the primary reasons that amateur radio exists is to provide emergency and public service communications. All over the world, amateur radio operators are prepared to step up and provide emergency communications when disaster strikes or to provide communications for events such as marathons or bicycle tours. Second, amateur radio is important because it helps advance the state of the art. By experimenting with different circuits and communications techniques, amateurs have made significant contributions to communications science and technology. Third, amateur radio plays a big part in technical education. It enables students to experiment with wireless technologies and electronics in ways that aren t possible without a license. Amateur radio has historically been a gateway for young people interested in pursuing a career in engineering or science, such as network or electrical engineering. Fourth and this point is a little less obvious than the first three amateur radio is a way to enhance international goodwill and community. Radio knows no boundaries, of course, and amateurs are therefore ambassadors for their country, reaching out to all around the world. Beyond amateur radio, ARDC also supports and promotes research and innovation in the broader field of digital communication and information and communication science and technology. Information and communication technology plays a big part in our lives, be it for business, education, or personal communications. For example, think of the impact that cell phones have had on our culture. The challenge is that much of this work is proprietary and owned by large corporations. By focusing on open source work in this area, we help open the door to innovation outside of the corporate landscape, which is important to overall technological resiliency.
Chris: Could you briefly outline the history of ARDC? Dan: Nearly forty years ago, a group of visionary hams saw the future possibilities of what was to become the internet and requested an address allocation from the Internet Assigned Numbers Authority (IANA). That allocation included more than sixteen million IPv4 addresses, 44.0.0.0 through 44.255.255.255. These addresses have been used exclusively for amateur radio applications and experimentation with digital communications techniques ever since. In 2011, the informal group of hams administering these addresses incorporated as a nonprofit corporation, Amateur Radio Digital Communications (ARDC). ARDC is recognized by IANA, ARIN and the other Internet Registries as the sole owner of these addresses, which are also known as AMPRNet or 44Net. Over the years, ARDC has assigned addresses to thousands of hams on a long-term loan (essentially acting as a zero-cost lease), allowing them to experiment with digital communications technology. Using these IP addresses, hams have carried out some very interesting and worthwhile research projects and developed practical applications, including TCP/IP connectivity via radio links, digital voice, telemetry and repeater linking. Even so, the amateur radio community never used much more than half the available addresses, and today, less than one third of the address space is assigned and in use. This is one of the reasons that ARDC, in 2019, decided to sell one quarter of the address space (or approximately 4 million IP addresses) and establish an endowment with the proceeds. This endowment now funds ARDC s a suite of grants, including scholarships, research projects, and of course amateur radio projects. Initially, ARDC was restricted to awarding grants to organizations in the United States, but is now able to provide funds to organizations around the world.
Chris: How does the Reproducible Builds effort help ARDC achieve its goals? Dan: Our aspirational goals include: We think that the Reproducible Builds efforts in helping to ensure the safety and security of open source software closely align with those goals.
Chris: Are there any specific success stories that ARDC is particularly proud of? Dan: We are really proud of our grant to the Hoopa Valley Tribe in California. With a population of nearly 2,100, their reservation is the largest in California. Like everywhere else, the COVID-19 pandemic hit the reservation hard, and the lack of broadband internet access meant that 130 children on the reservation were unable to attend school remotely. The ARDC grant allowed the tribe to address the immediate broadband needs in the Hoopa Valley, as well as encourage the use of amateur radio and other two-way communications on the reservation. The first nation was able to deploy a network that provides broadband access to approximately 90% of the residents in the valley. And, in addition to bringing remote education to those 130 children, the Hoopa now use the network for remote medical monitoring and consultation, adult education, and other applications. Other successes include our grants to:
Chris: ARDC supports a number of other existing projects and initiatives, not all of them in the open source world. How much do you feel being a part of the broader free culture movement helps you achieve your aims? Dan: In general, we find it challenging that most digital communications technology is proprietary and closed-source. It's part of our mission to fund open source alternatives. Without them, we are solely reliant, as a society, on corporate interests for our digital communication needs. It makes us vulnerable and it puts us at risk of increased surveillance. Thus, ARDC supports open source software wherever possible, and our grantees must make a commitment to share their work under an open source license or otherwise make it as freely available as possible.
Chris: Thanks so much for taking the time to talk to us today. Now, if someone wanted to know more about ARDC or to get involved, where might they go to look? To learn more about ARDC in general, please visit our website at https://www.ampr.org. To learn more about 44Net, go to https://wiki.ampr.org/wiki/Main_Page. And, finally, to learn more about our grants program, go to https://www.ampr.org/apply/

For more about the Reproducible Builds project, please see our website at reproducible-builds.org. If you are interested in ensuring the ongoing security of the software that underpins our civilisation and wish to sponsor the Reproducible Builds project, please reach out to the project by emailing contact@reproducible-builds.org.

5 April 2022

Kees Cook: security things in Linux v5.10

Previously: v5.9 Linux v5.10 was released in December, 2020. Here's my summary of various security things that I found interesting: AMD SEV-ES
While guest VM memory encryption with AMD SEV has been supported for a while, Joerg Roedel, Thomas Lendacky, and others added register state encryption (SEV-ES). This means it's even harder for a VM host to reconstruct a guest VM's state. x86 static calls
Josh Poimboeuf and Peter Zijlstra implemented static calls for x86, which operates very similarly to the static branch infrastructure in the kernel. With static branches, an if/else choice can be hard-coded, instead of being run-time evaluated every time. Such branches can be updated too (the kernel just rewrites the code to switch around the "branch"). All these principles apply to static calls as well, but they're for replacing indirect function calls (i.e. a call through a function pointer) with a direct call (i.e. a hard-coded call address). This eliminates the need for Spectre mitigations (e.g. RETPOLINE) for these indirect calls, and avoids a memory lookup for the pointer. For hot-path code (like the scheduler), this has a measurable performance impact. It also serves as a kind of Control Flow Integrity implementation: an indirect call got removed, and the potential destinations have been explicitly identified at compile-time. network RNG improvements
In an effort to improve the pseudo-random number generator used by the network subsystem (for things like port numbers and packet sequence numbers), Linux's home-grown pRNG has been replaced by the SipHash round function, and perturbed by (hopefully) hard-to-predict internal kernel states. This should make it very hard to brute force the internal state of the pRNG and make predictions about future random numbers just from examining network traffic. Similarly, ICMP's global rate limiter was adjusted to avoid leaking details of network state, as a start to fixing recent DNS Cache Poisoning attacks. SafeSetID handles GID
Thomas Cedeno improved the SafeSetID LSM to handle group IDs (which required teaching the kernel about which syscalls were actually performing setgid.) Like the earlier setuid policy, this lets the system owner define an explicit list of allowed group ID transitions under CAP_SETGID (instead of to just any group), providing a way to keep the power of granting this capability much more limited. (This isn t complete yet, though, since handling setgroups() is still needed.) improve kernel s internal checking of file contents
The kernel provides LSMs (like the Integrity subsystem) with details about files as they're loaded. (For example, loading modules, new kernel images for kexec, and firmware.) There wasn't very good coverage for cases where the contents were coming from things that weren't files. To deal with this, new hooks were added that allow the LSMs to introspect the contents directly, and to do partial reads. This will give the LSMs much finer grain visibility into these kinds of operations. set_fs removal continues
With the earlier work landed to free the core kernel code from set_fs(), Christoph Hellwig made it possible for set_fs() to be optional for an architecture. Subsequently, he then removed set_fs() entirely for x86, riscv, and powerpc. These architectures will now be free from the entire class of kernel address limit attacks that only needed to corrupt a single value in struct thread_info. sysfs_emit() replaces sprintf() in /sys
Joe Perches tackled one of the most common bug classes with sprintf() and snprintf() in /sys handlers by creating a new helper, sysfs_emit(). This will handle the cases where kernel code was not correctly dealing with the length results from sprintf() calls, which might lead to buffer overflows in the PAGE_SIZE buffer that /sys handlers operate on. With the helper in place, it was possible to start the refactoring of the many sprintf() callers. nosymfollow mount option
Mattias Nissler and Ross Zwisler implemented the nosymfollow mount option. This entirely disables symlink resolution for the given filesystem, similar to other mount options where noexec disallows execve(), nosuid disallows setid bits, and nodev disallows device files. Quoting the patch, it is useful as a defensive measure for systems that need to deal with untrusted file systems in privileged contexts. (i.e. for when /proc/sys/fs/protected_symlinks isn't a big enough hammer.) Chrome OS uses this option for its stateful filesystem, as symlink traversal has been a common attack-persistence vector. ARMv8.5 Memory Tagging Extension support
Vincenzo Frascino added support to arm64 for the coming Memory Tagging Extension, which will be available for ARMv8.5 and later chips. It provides 4 bits of tags (covering multiples of 16 byte spans of the address space). This is enough to deterministically eliminate all linear heap buffer overflow flaws (1 tag for "free", and then rotate even values and odd values for neighboring allocations), which is probably one of the most common bugs being currently exploited. It also makes use-after-free and over/under indexing much more difficult for attackers (but still possible if the target's tag bits can be exposed). Maybe some day we can switch to 128 bit virtual memory addresses and have fully versioned allocations. But for now, 16 tag values is better than none, though we do still need to wait for anyone to actually be shipping ARMv8.5 hardware. fixes for flaws found by UBSAN
The work to make UBSAN generally usable under syzkaller continues to bear fruit, with various fixes all over the kernel for stuff like shift-out-of-bounds, divide-by-zero, and integer overflow. Seeing these kinds of patches land reinforces the rationale of shifting the burden of these kinds of checks to the toolchain: these run-time bugs continue to pop up. flexible array conversions
The work on flexible array conversions continues. Gustavo A. R. Silva and others continued to grind on the conversions, getting the kernel ever closer to being able to enable the -Warray-bounds compiler flag and clear the path for saner bounds checking of array indexes and memcpy() usage. That's it for now! Please let me know if you think anything else needs some attention. Next up is Linux v5.11.

2022, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License.
CC BY-SA 4.0

12 March 2022

Petter Reinholdtsen: Publish Hargassner wood chip boiler state to MQTT

Recently I had a look at a Hargassner wood chip boiler, and what kind of free software can be used to monitor and control it. The boiler can be connected to some cloud service via what the producer calls an Internet Gateway, which seems to be a computer connecting to the boiler and passing the information gathered to the cloud. I discovered the boiler controller got an IP address on the local network and listens on TCP port 23, providing status information as a text line of numbers. It also provides an HTTP server listening on port 80, but I have not yet figured out what it can do besides return an error code. If I am to believe various free software implementations talking to such boilers, the interpretation of the line of numbers differs between boiler type and software version on the boiler. By comparing the list of numbers on the front panel of the boiler with the numbers returned via TCP, I have been able to figure out several of the numbers, but there are a lot left to understand. I've located several temperature measurements and hours-running values, as well as oxygen measurements and counters. I decided to write a simple parser in Python for the values I have figured out so far, and a simple MQTT injector publishing both the interpreted and the unknown values on an MQTT bus to make collecting and graphing simpler. The end result is available from the hargassner2mqtt project page on GitLab. I very much welcome patches extending the parser to understand more values, boiler types and software versions. I do not really expect many free software developers to have gotten their hands on such a unit to experiment with, but it would be fun if others too find this project useful. As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
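To give an idea of the approach, here is a minimal Python sketch in the spirit of hargassner2mqtt, not the actual project code: it reads one line of numbers from the boiler on TCP port 23 and republishes the fields over MQTT. The boiler address, the field positions and the topic layout are all assumptions; as noted above, the meaning of the numbers differs between boiler types and firmware versions.

import socket
import paho.mqtt.client as mqtt

BOILER_HOST = "192.168.1.50"    # assumption: the boiler's address on the LAN
MQTT_HOST = "localhost"

# Assumed field positions; real offsets depend on boiler type and firmware.
FIELDS = {1: "boiler_temp", 2: "flue_gas_temp", 5: "oxygen_percent"}

client = mqtt.Client()
client.connect(MQTT_HOST)

with socket.create_connection((BOILER_HOST, 23)) as sock:
    line = sock.makefile().readline()          # one text line of numbers
    for idx, value in enumerate(line.split()):
        topic = FIELDS.get(idx, f"unknown/{idx}")
        client.publish(f"hargassner/{topic}", value)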

13 February 2022

Gunnar Wolf: Got to boot a RPi Zero 2 W with Debian

About a month ago, I got tired of waiting for the newest member of the Raspberry product lineup to be sold in Mexico, and I bought it from a Chinese reseller through a big online shopping platform. I paid quite a bit of premium (~US$85 instead of the advertised US$15), and got it delivered ten days later. Anyway, it's known this machine does not yet boot mainline Linux. The vast majority of ARM systems require the bootloader to load a Device Tree file, presenting the hardware characteristics map. And while the RPi Zero 2 W (hey, what an awful and confusing naming scheme they chose!) is mostly similar to a RPi3B+, it is not quite the same thing. A kernel with the RPi3B+'s device tree will refuse to boot. Anyway, I started digging, and found that some days ago Stephan Wahren sent a patch to the linux-arm-kernel mailing list with a matching device tree. Read the patch! It's quite simple to read (what is harder is to know where each declaration should go, if you want to write your own, of course). It basically includes all basic details for the main chip in the RPi3 family (BCM2837), pulls in also the declarations from the BCM2836 present in the RPi2, and adds the necessary bits for the USB OTG connection and the WiFi and Bluetooth declarations. It registers the model name as Raspberry Pi Zero 2 W, which you can easily see in the following photo, informs the kernel it has 512MB RAM, and... Well, really, it's an easy device tree to read, don't be shy! So, I booted my RPi 3B+ with a freshly downloaded Bookworm image, installed and unpacked linux-source-5.15, applied Stephan's patch, and added the following for the DTB to be generated in the arm64 tree as well:
--- /dev/null   2022-01-26 23:35:40.747999998 +0000
+++ arch/arm64/boot/dts/broadcom/bcm2837-rpi-zero-2-w.dts       2022-02-13 06:28:29.968429953 +0000
@@ -0,0 +1 @@
+#include "arm/bcm2837-rpi-zero-2-w.dts"
Then, ran a simple make dtbs, and... Failed, because bcm283x-rpi-wifi-bt.dtsi is not yet in the kernel. OK, no worries: getting wireless to work is a later step. I commented out the lines causing conflict (10, 33-35, 134-136), and:
root@rpi3-20220212:/usr/src/linux-source-5.15# make dtbs
  DTC     arch/arm64/boot/dts/broadcom/bcm2837-rpi-zero-2-w.dtb
Great! Just copied over that generated file to /boot/firmware/, moved the SD over to my RPiZ2W, and behold! It boots! When I bragged about it in #debian-raspberrypi, steev suggested I pull in the WiFi patch, which has also been submitted (but not yet accepted) for kernel inclusion. I did so, uncommented the lines I had modified, and built again. It builds correctly, and again I copied the DTB over. It still does not find the WiFi; dmesg still complains about a missing bit of firmware (failed to load brcm/brcmfmac43430b0-sdio.raspberrypi,model-zero-2-w.bin). Steev pointed out it can be downloaded from the RPi Distro's GitHub page, but I called it a night and didn't pursue it any further ;-) So I understand this post is still a far cry from saying "our images properly boot under a RPi 0 2 W", but we will get there

23 January 2022

Louis-Philippe Véronneau: Goodbye Nexus 5

I've blogged a few times already about my Nexus 5, the Android device I have/had been using for 8 years. Sadly, it died a few weeks ago, when the WiFi chip stopped working. I could probably have attempted a mainboard swap, but at this point, getting a new device seemed like the best choice. In a world where most Android devices are EOL after less than 3 years, it is amazing I was able to keep this device for so long, always running the latest Android version with the latest security patch. The Nexus 5 originally shipped with Android 4.4 and when it broke, I was running Android 11, with the November security patch! I'm very grateful to the FOSS Android community that made this possible, especially the LineageOS community. I've replaced my Nexus 5 by a used Pixel 3a, mostly because of the similar form factor, relatively affordable price and the presence of a headphone jack. Google also makes flashing a custom ROM easy, although I had more trouble with this than I first expected. The first Pixel 3a I bought on eBay was a scam: I ordered an "Open Box" phone and it arrived all scratched1 and with a broken rear camera. The second one I got (from the Amazon Renewed program) arrived in perfect condition, but happened to be a Verizon model. As I found out, Verizon locks the bootloader on their phones, making it impossible to install LineageOS2. The vendor was kind enough to let me return it. As they say, third time's the charm. This time around, I explicitly bought a phone on eBay listed with a unlocked bootloader. I'm very satisfied with my purchase, but all in all, dealing with all the returns and the shipping was exhausting. Hopefully this phone will last as long as my Nexus 5!

  1. There was literally a whole layer missing at the back, as if someone had sanded the phone...
  2. Apparently, an "Unlocked phone" means it is "SIM unlocked", i.e. you can use it with any carrier. What I should have been looking for is a "Factory Unlocked phone", one where the bootloader isn't locked :L

17 January 2022

Matthew Garrett: Boot Guard and PSB have user-hostile defaults

Compromising an OS without it being detectable is hard. Modern operating systems support the imposition of a security policy or the launch of some sort of monitoring agent sufficiently early in boot that even if you compromise the OS, you're probably going to have left some sort of detectable trace[1]. You can avoid this by attacking the lower layers - if you compromise the bootloader then it can just hotpatch a backdoor into the kernel before executing it, for instance.

This is avoided via one of two mechanisms. Measured boot (such as TPM-based Trusted Boot) makes a tamper-proof cryptographic record of what the system booted, with each component in turn creating a measurement of the next component in the boot chain. If a component is tampered with, its measurement will be different. This can be used to either prevent the release of a cryptographic secret if the boot chain is modified (for instance, using the TPM to encrypt the disk encryption key), or can be used to attest the boot state to another device which can tell you whether you're safe or not. The other approach is verified boot (such as UEFI Secure Boot), where each component in the boot chain verifies the next component before executing it. If the verification fails, execution halts.
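As a toy illustration of the measured boot idea (not of any particular TPM or firmware implementation), each stage folds a hash of the next stage into a running register value before handing over control, so changing any component changes the final value. A minimal Python sketch, with made-up component names:

import hashlib

def pcr_extend(pcr: bytes, data: bytes) -> bytes:
    # TPM-style extend: new PCR value = H(old value || H(measured data))
    return hashlib.sha256(pcr + hashlib.sha256(data).digest()).digest()

pcr = bytes(32)   # PCRs start out as all zeroes
for component in [b"system firmware", b"bootloader", b"kernel"]:
    pcr = pcr_extend(pcr, component)

print(pcr.hex())  # tamper with any component and this final value differs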

In both cases, each component in the boot chain measures and/or verifies the next. But something needs to be the first link in this chain, and traditionally this was the system firmware. Which means you could tamper with the system firmware and subvert the entire process - either have the firmware patch the bootloader in RAM after measuring or verifying it, or just load a modified bootloader and lie about the measurements or ignore the verification. Attackers had already been targeting the firmware (Hacking Team had something along these lines, although this was pre-secure boot so just dropped a rootkit into the OS), and given a well-implemented measured and verified boot chain, the firmware becomes an even more attractive target.

Intel's Boot Guard and AMD's Platform Secure Boot attempt to solve this problem by moving the validation of the core system firmware to an (approximately) immutable environment. Intel's solution involves the Management Engine, a separate x86 core integrated into the motherboard chipset. The ME's boot ROM verifies a signature on its firmware before executing it, and once the ME is up it verifies that the system firmware's bootblock is signed using a public key that corresponds to a hash blown into one-time programmable fuses in the chipset. What happens next depends on policy - it can either prevent the system from booting, allow the system to boot to recover the firmware but automatically shut it down after a while, or flag the failure but allow the system to boot anyway. Most policies will also involve a measurement of the bootblock being pushed into the TPM.

AMD's Platform Secure Boot is slightly different. Rather than the root of trust living in the motherboard chipset, it's in AMD's Platform Security Processor which is incorporated directly onto the CPU die. Similar to Boot Guard, the PSP has ROM that verifies the PSP's own firmware, and then that firmware verifies the system firmware signature against a set of blown fuses in the CPU. If that fails, system boot is halted. I'm having trouble finding decent technical documentation about PSB, and what I have found doesn't mention measuring anything into the TPM - if this is the case, PSB only implements verified boot, not measured boot.

What's the practical upshot of this? The first is that you can't replace the system firmware with anything that doesn't have a valid signature, which effectively means you're locked into firmware the vendor chooses to sign. This prevents replacing the system firmware with either a replacement implementation (such as Coreboot) or a modified version of the original implementation (such as firmware that disables locking of CPU functionality or removes hardware allowlists). In this respect, enforcing system firmware verification works against the user rather than benefiting them.
Of course, it also prevents an attacker from doing the same thing, but while this is a real threat to some users, I think it's hard to say that it's a realistic threat for most users.

The problem is that vendors are shipping with Boot Guard and (increasingly) PSB enabled by default. In the AMD case this causes another problem - because the fuses are in the CPU itself, a CPU that's had PSB enabled is no longer compatible with any motherboards running firmware that wasn't signed with the same key. If a user wants to upgrade their system's CPU, they're effectively unable to sell the old one. But in both scenarios, the user's ability to control what their system is running is reduced.

As I said, the threat that these technologies seek to protect against is real. If you're a large company that handles a lot of sensitive data, you should probably worry about it. If you're a journalist or an activist dealing with governments that have a track record of targeting people like you, it should probably be part of your threat model. But otherwise, the probability of you being hit by a purely userland attack is so ludicrously high compared to you being targeted this way that it's just not a big deal.

I think there's a more reasonable tradeoff than where we've ended up. Tying things like disk encryption secrets to TPM state means that if the system firmware is measured into the TPM prior to being executed, we can at least detect that the firmware has been tampered with. In this case nothing prevents the firmware being modified, there's just a record in your TPM that it's no longer the same as it was when you encrypted the secret. So, here's what I'd suggest:

1) The default behaviour of technologies like Boot Guard or PSB should be to measure the firmware signing key and whether the firmware has a valid signature into PCR 7 (the TPM register that is also used to record which UEFI Secure Boot signing key is used to verify the bootloader).
2) If the PCR 7 value changes, the disk encryption key release will be blocked, and the user will be redirected to a key recovery process. This should include remote attestation, allowing the user to be informed that their firmware signing situation has changed.
3) Tooling should be provided to switch the policy from merely measuring to verifying, and users at meaningful risk of firmware-based attacks should be encouraged to make use of this tooling.

This would allow users to replace their system firmware at will, at the cost of having to re-seal their disk encryption keys against the new TPM measurements. It would provide enough information that, in the (unlikely for most users) scenario that their firmware has actually been modified without their knowledge, they can identify that. And it would allow users who are at high risk to switch to a higher security state, and for hardware that is explicitly intended to be resilient against attacks to have different defaults.
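Here's a minimal sketch of what suggestions 1 and 2 amount to (illustrative Python only - a real system would use the TPM's sealing and policy machinery rather than comparing hashes in software, and every name and value here is invented):

    import hashlib, secrets

    def measure_firmware_state(signing_key: bytes, signature_valid: bool) -> bytes:
        """What PCR 7 would hold: the firmware signing key plus whether the
        current firmware carries a valid signature from it."""
        pcr = hashlib.sha256(bytes(32) + hashlib.sha256(signing_key).digest()).digest()
        return hashlib.sha256(pcr + bytes([signature_valid])).digest()

    def seal(secret: bytes, expected_pcr: bytes) -> dict:
        # Stand-in for TPM sealing: bind the secret to an expected PCR value.
        return {"expected_pcr": expected_pcr, "secret": secret}

    def unseal(blob: dict, current_pcr: bytes) -> bytes:
        if current_pcr != blob["expected_pcr"]:
            # Suggestion 2: withhold the key and send the user to recovery,
            # ideally with remote attestation of the new firmware state.
            raise PermissionError("firmware signing state changed - recovery needed")
        return blob["secret"]

    disk_key = secrets.token_bytes(32)
    sealed = seal(disk_key, measure_firmware_state(b"vendor key", True))

    # Same firmware signing situation: the key is released as normal.
    assert unseal(sealed, measure_firmware_state(b"vendor key", True)) == disk_key

    # Firmware now signed by someone else (the user flashing Coreboot, or an
    # attacker): the key is withheld and the user is told why.
    try:
        unseal(sealed, measure_firmware_state(b"coreboot key", True))
    except PermissionError as err:
        print(err)

Re-sealing after a deliberate firmware change is then just a matter of repeating the seal step against the new measurement.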

This is frustratingly close to possible with Boot Guard, but I don't think it's quite there. Before you've blown the Boot Guard fuses, the Boot Guard policy can be read out of flash. This means that you can drop a Boot Guard configuration into flash telling the ME to measure the firmware but not prevent it from running. But there are two problems remaining:

1) The measurement is made into PCR 0, and PCR 0 changes every time your firmware is updated. That makes it a bad default for sealing encryption keys.
2) It doesn't look like the policy is measured before being enforced. This means that an attacker can simply reflash modified firmware with a policy that disables measurement and then make a fake measurement that makes it look like the firmware is ok.

Fixing this seems simple enough - the Boot Guard policy should always be measured, and measurements of the policy and the signing key should be made into a PCR other than PCR 0. If an attacker modified the policy, the PCR value would change. If an attacker modified the firmware without modifying the policy, the PCR value would also change. People who are at high risk would run an app that would blow the Boot Guard policy into fuses rather than just relying on the copy in flash, and enable verification as well as measurement. Now if an attacker tampers with the firmware, the system simply refuses to boot and the attacker doesn't get anything.
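A toy demonstration of why measuring both the policy and the signing key closes that hole (again, illustrative Python with invented values):

    import hashlib

    def extend(pcr: bytes, value: bytes) -> bytes:
        return hashlib.sha256(pcr + hashlib.sha256(value).digest()).digest()

    def boot_measurement(policy: bytes, signing_key: bytes) -> bytes:
        # Measure the Boot Guard policy first, then the firmware signing key,
        # into a PCR that routine firmware updates don't disturb.
        return extend(extend(bytes(32), policy), signing_key)

    baseline = boot_measurement(b"measure-only policy", b"vendor signing key")

    # Attacker weakens the policy: the measurement changes.
    assert boot_measurement(b"no-measure policy", b"vendor signing key") != baseline

    # Attacker keeps the policy but flashes firmware signed with their own key:
    # the measurement still changes, so sealed secrets stay sealed.
    assert boot_measurement(b"measure-only policy", b"attacker key") != baseline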

Things are harder on the AMD side. I can't find any indication that PSB supports measuring the firmware at all, which obviously makes this approach impossible. I'm somewhat surprised by that, and so wouldn't be surprised if it does do a measurement somewhere. If it doesn't, there's a rather more significant problem - if a system has a socketed CPU, and someone has sufficient physical access to replace the firmware, they can just swap out the CPU as well with one that doesn't have PSB enabled. Under normal circumstances the system firmware can detect this and prompt the user, but given that the attacker has just replaced the firmware we can assume that they'd do so with firmware that doesn't decide to tell the user what just happened. In the absence of better documentation, it's extremely hard to say that PSB actually provides meaningful security benefits.

So, overall: I think Boot Guard protects against a real-world attack that matters to a small but important set of targets. I think most of its benefits could be provided in a way that still gave users control over their system firmware, while also permitting high-risk targets to opt-in to stronger guarantees. Based on what's publicly documented about PSB, it's hard to say that it provides real-world security benefits for anyone at present. In both cases, what's actually shipping reduces the control people have over their systems, and should be considered user-hostile.

[1] Assuming that someone's both turning this on and actually looking at the data produced


9 January 2022

Matthew Garrett: Pluton is not (currently) a threat to software freedom

At CES this week, Lenovo announced that their new Z-series laptops would ship with AMD processors that incorporate Microsoft's Pluton security chip. There's a fair degree of cynicism around whether Microsoft have the interests of the industry as a whole at heart or not, so unsurprisingly people have voiced concerns about Pluton allowing for platform lock-in and future devices no longer booting non-Windows operating systems. Based on what we currently know, I think those concerns are understandable but misplaced.

But first it's helpful to know what Pluton actually is, and that's hard because Microsoft haven't actually provided much in the way of technical detail. The best I've found is a discussion of Pluton in the context of Azure Sphere, Microsoft's IoT security platform. This, in association with the block diagrams on pages 12 and 13 of this slidedeck, suggests that Pluton is a general purpose security processor in a similar vein to Google's Titan chip. It has a relatively low powered CPU core, an RNG, and various hardware cryptography engines - there's nothing terribly surprising here, and it's pretty much the same set of components that you'd find in a standard Trusted Platform Module of the sort shipped in pretty much every modern x86 PC. But unlike Titan, Pluton seems to have been designed with the explicit goal of being incorporated into other chips, rather than being a standalone component. In the Azure Sphere case, we see it directly incorporated into a Mediatek chip. In the Xbox Series devices, it's incorporated into the SoC. And now, we're seeing it arrive on general purpose AMD CPUs.

Microsoft's announcement says that Pluton can be shipped in three configurations: as the Trusted Platform Module; as a security processor used for non-TPM scenarios like platform resiliency; or OEMs can choose to ship with Pluton turned off. What we're likely to see to begin with is the former - Pluton will run firmware that exposes a Trusted Computing Group compatible TPM interface. This is almost identical to the status quo. Microsoft have required that all Windows certified hardware ship with a TPM for years now, but for cost reasons this is often not in the form of a separate hardware component. Instead, both Intel and AMD provide support for running the TPM stack on a component separate from the main execution cores on the system - for Intel, this TPM code runs on the Management Engine integrated into the chipset, and for AMD on the Platform Security Processor that's integrated into the CPU package itself.

So in this respect, Pluton changes very little; the only difference is that the TPM code is running on hardware dedicated to that purpose, rather than alongside other code. Importantly, in this mode Pluton will not do anything unless the system firmware or OS ask it to. Pluton cannot independently block the execution of any other code - it knows nothing about the code the CPU is executing unless explicitly told about it. What the OS can certainly do is ask Pluton to verify a signature before executing code, but the OS could also just verify that signature itself. Windows can already be configured to reject software that doesn't have a valid signature. If Microsoft wanted to enforce that, they could just change the default today - there's no need to wait until everyone has hardware with Pluton built-in.

The two things that seem to cause people concerns are remote attestation and the fact that Microsoft will be able to ship firmware updates to Pluton via Windows Update. I've written about remote attestation before, so won't go into too many details here, but the short summary is that it's a mechanism that allows your system to prove to a remote site that it booted a specific set of code. What's important to note here is that the TPM (Pluton, in the scenario we're talking about) can't do this on its own - remote attestation can only be triggered with the aid of the operating system. Microsoft's Device Health Attestation is an example of remote attestation in action, and the technology definitely allows remote sites to refuse to grant you access unless you booted a specific set of software. But there are two important things to note here: first, remote attestation cannot prevent you from booting whatever software you want, and second, as evidenced by Microsoft already having a remote attestation product, you don't need Pluton to do this! Remote attestation has been possible since TPMs started shipping over two decades ago.
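In outline, the protocol looks something like this (a simplified sketch - a real quote is an asymmetric signature made with a key the verifier already trusts, whereas this uses an HMAC as a stand-in, and every name here is invented):

    import hashlib, hmac, secrets

    # Stand-in for the TPM's attestation key; real attestation uses an
    # asymmetric key whose public half the verifier already knows.
    ATTESTATION_KEY = secrets.token_bytes(32)

    def tpm_quote(pcrs: dict[int, bytes], nonce: bytes) -> tuple[bytes, bytes]:
        """Device side: the OS asks the TPM to sign the PCR values plus the
        verifier's nonce. The TPM can't initiate this on its own."""
        digest = hashlib.sha256(nonce + b"".join(pcrs[i] for i in sorted(pcrs))).digest()
        return digest, hmac.new(ATTESTATION_KEY, digest, hashlib.sha256).digest()

    def verify_quote(digest: bytes, signature: bytes, nonce: bytes,
                     expected_pcrs: dict[int, bytes]) -> bool:
        """Verifier side: check the quote is fresh (covers our nonce) and the
        reported PCRs match a configuration we're willing to accept."""
        expected = hashlib.sha256(
            nonce + b"".join(expected_pcrs[i] for i in sorted(expected_pcrs))).digest()
        return digest == expected and hmac.compare_digest(
            signature, hmac.new(ATTESTATION_KEY, digest, hashlib.sha256).digest())

    nonce = secrets.token_bytes(16)
    pcrs = {0: hashlib.sha256(b"firmware").digest(),
            7: hashlib.sha256(b"secure boot key").digest()}
    digest, sig = tpm_quote(pcrs, nonce)
    assert verify_quote(digest, sig, nonce, pcrs)  # booted what we expected
    assert not verify_quote(digest, sig, nonce,
                            {**pcrs, 0: hashlib.sha256(b"other firmware").digest()})

The point to notice is that the quote only ever happens when the OS asks for it, and it only tells the verifier what was booted - it doesn't stop anything from booting.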

The other concern is Microsoft having control over the firmware updates. The context here is that TPMs are not magically free of bugs, and sometimes these can have security consequences. One example is Infineon TPMs producing weak RSA keys, a vulnerability that could be rectified by a firmware update to the TPM. Unfortunately these updates had to be issued by the device manufacturer rather than Infineon being able to do so directly. This meant users had to wait for their vendor to get around to shipping an update, something that might not happen at all if the machine was sufficiently old. From a security perspective, being able to ship firmware updates for the TPM without them having to go through the device manufacturer is a huge win.

Microsoft's obviously in a position to ship a firmware update that modifies the TPM's behaviour - there would be no technical barrier to them shipping code that resulted in the TPM just handing out your disk encryption secret on demand. But Microsoft already control the operating system, so they already have your disk encryption secret. There's no need for them to backdoor the TPM to give them something that the TPM's happy to give them anyway. If you don't trust Microsoft then you probably shouldn't be running Windows, and if you're not running Windows Microsoft can't update the firmware on your TPM.

So, as of now, Pluton running firmware that makes it look like a TPM just isn't a terribly interesting change to where we are already. It can't block you running software (either apps or operating systems). It doesn't enable any new privacy concerns. There's no mechanism for Microsoft to forcibly push updates to it if you're not running Windows.

Could this change in future? Potentially. Microsoft mention another use-case for Pluton "as a security processor used for non-TPM scenarios like platform resiliency", but don't go into any more detail. At this point, we don't know the full set of capabilities that Pluton has. Can it DMA? Could it play a role in firmware authentication? There are scenarios where, in theory, a component such as Pluton could be used in ways that would make it more difficult to run arbitrary code. It would be reassuring to hear more about what the non-TPM scenarios are expected to look like and what capabilities Pluton actually has.

But let's not lose sight of something more fundamental here. If Microsoft wanted to block free operating systems from new hardware, they could simply mandate that vendors remove the ability to disable secure boot or modify the key databases. If Microsoft wanted to prevent users from being able to run arbitrary applications, they could just ship an update to Windows that enforced signing requirements. If they want to be hostile to free software, they don't need Pluton to do it.

(Edit: it's been pointed out that I kind of gloss over the fact that remote attestation is a potential threat to free software, as it theoretically allows sites to block access based on which OS you're running. There are various reasons I don't think this is realistic - one is that there's just way too much variability in measurements for it to be practical to write a policy that's strict enough to offer useful guarantees without also blocking a number of legitimate users, and the other is that you can just pass the request through to a machine that is running the appropriate software and have it attest for you. The fact that nobody has actually bothered to use remote attestation for this purpose even though most consumer systems already ship with TPMs suggests that people generally agree with me on that)

